Hey guys, nice video! Regardless of what you said, I think there is insufficient attention given to SSD reliability. We have many years of large-scale reliability data about hard drives, but very little reliability data about SSDs. I'm not taking a position here; I just think it's a concern.
Thanks, lovely discussion. I think it is all about the use case... if you are a crazy 8K video editor, well... SSD. For me it is not the case... my NAS will be used as a glacier solution for family photos and videos, plus a few small local VMs and a few containers... for that I find large HDDs still the best option, ONLY if they are fronted by a few mirrored SSDs to catch the "heat" of the moment when people (aka family) are throwing their videos/photos at it in panic mode... and then slowly redistribute everything to the HDDs in the backend... then spin them down. With the prices of SSDs going down, I will most likely move a tier up and go with 2-4 SSDs of up to 4TB per drive... plus a bunch (4-6) of HDDs in the backend for mass storage.
Not even 8k... I've been editing 4k60 for the better part of a decade. You *need* solid state memory, and not just for editing. Dumping data efficiently is a time-consuming process that saturates SATA (and HDDs can't sustain fast write speeds for large transfers). Even 1080p60 benefits greatly from SSDs.
I've learned so much from this channel after accidentally supporting the UGREEN NAS on Kickstarter. So stoked I actually got it, even more stoked that this channel exists to educate dummies like me
I kinda see the arguments regarding the noise of a NAS equipped with loud hard drives if the NAS is on the desk next to one's workstation, but that is a worst-case scenario. Modern pro NAS drives are rather quiet. I have a 10 gigabit network at home, so my 10 gigabit equipped NAS can be 15 ft away from my workstation in an open rack (buy a closed rack if noise is a concern). It is a 6-bay with 7200 rpm IronWolf Pro drives (and is paired to an identical unit for backup). Noise is not an issue. I ran my previous NAS and NAS backup units under my desk and, again, noise was never an issue. Speed is not an issue because I am using 6 drives and I see peak performance of around 700 MB/s. If I need something faster than that for work files, I'll use my workstation's internal M.2 storage and then bulk copy the work over to the NAS once I am done. This video should have been full of benchmarks for noise, performance, etc. As there were none, this is a lost opportunity. The author rarely benchmarks anything beyond Plex, so who is he to comment so stridently regarding noise? I've never had an issue with NAS-specific drives when it comes to noise, and I've been using them since the first Western Digital Red drives came on the market. I recommend the author set up a 6-bay NAS, equip it with 6 large IronWolf Pro 7200 rpm drives, attach it to a 10 gigabit network, and then go to town on the performance and audio level testing.
I have 28TB of disk: 24TB is HDD and 4TB is flash. I use the HDDs for long-term storage, and I can max out the HDD interface on all 3 of them. The rated lifespan of the SSDs is 5 years; my HDDs have a 5-year data recovery service and a 5-plus-year warranty. My drives are quiet even when being used; the only issue is heat. RC
I've loved my Synology, but it's finally moving 'off-site' to just serve as a backup target. The NVMe SSDs did help with the Docker containers I had running on the NAS, but did hardly anything for my workflow. 1Gb Ethernet just doesn't cut it for 4K video editing (raw footage on the NAS, proxies local). I've tested making an NVMe SSD share on a Linux server with 10Gb Ethernet and it's entirely possible to store proxies on the NAS too. So the Synology is going to become a backup target, and an SSD + 10Gb NAS is currently being sourced (the main issue is finding a low-power CPU + mainboard that supports bifurcation on the x16 slot to 4 times x4 NVMe).
Show me a NAS with 10 manageable bays. Otherwise I am using 8x SATA ports on a mini-ITX board, plus USB JBOD disk enclosures attached for archival long-term storage.
I wanted some data for storage. Like SHTF storage: medical, engineering, agriculture type stuff for when there is no more food, water, clothing or shelter, and no doctors, farmers, craftsmen or the electricity to run all the equipment for those people. Started a NAS. Got excited and bought some good 6Gb HDDs that the NAS does not accept, along with some SSDs that don't work either. Did more research and found that the mean time to failure for both is nowhere near long enough. I now see how ancient societies could have lost everything.
Have a Synology 918+ with the 5-bay expansion and roughly 50TB. Also have an older desktop running Proxmox with IcyDock bays, loaded up with SATA SSDs, to act as my local backup for critical files. They're both connected to a managed Cisco switch via LACP.
Thinking about a PCIe 4.0 x16 to 4x M.2 adapter card to deploy 4 Optane P1600X 118GB drives for cache. But then, what do I need this speed for? Sitting on 5 disks in RAID 5, and the bottleneck is the 2.5Gbit network. The only points where SSDs are worth going for are electric power savings and noise levels. Edit: gave this a second thought - if I run into a scenario where I need high random read/write, this might be the way to go. Warning: the cards that offer PCIe to 4 M.2 slots need the mainboard to support bifurcation, specifically x4/x4/x4/x4. I use the Gigabyte MC12-LE0, which supports it (to give you an example).
Hard drives are really the only option if you want a capacity that makes a NAS worth having. Plus, SSDs will be kneecapped by the network in most homes. I have four 12TB WD Red drives in my NAS (will be going with Seagate IronWolf Pro drives in the future), and when I first got the NAS, the noise drove me fucking bonkers. I got four roughly inch-thick foam pads to set the NAS on, just to keep it from vibrating the desk it sits on and so on. But I've been running it for four years now, and I don't know if it's just me having gotten used to it, or that when I first had it it was still doing a lot of setup work (like Plex doing its thing), but I rarely hear it now. I only really hear the drives work hard when there's a S.M.A.R.T. test going, or when, as I did recently, I do a complete reorganization of my media and Plex decides it wants to rescan everything - then the drives annoyed me for a solid day.
It's going to depend on your storage requirements; however, the market is currently shifting. I built my NAS on FreeBSD/ZFS (RAID 10) and 2.5" 7200rpm drives. It's been running well over a decade, and the pool is very capable of saturating a 10Gbit NIC with all the drives. Hard drives are no slouches when pooled together, and having an SSD ZIL device for writes helps too. I wanted it to be lower power, thus the 2.5" drives, and over the years I would add drives as I needed more space and/or replace the failed ones. I ended up with an 18-mirror setup, i.e. 36 drives in the pool.
For the past few years the replacement drives have been SSDs, as 2.5" 7200rpm hard drives are not produced anymore and/or are of poor quality. This year I noticed another trend: NVMe drives are now cheaper than 2.5" SSDs. So I'm in the process of retiring my entire array and replacing it with an all-NVMe setup. My storage requirements are not that large, as I'm still running some 250GB drives that are over a decade old - I've had more newer drives fail than older ones - so five 4TB sticks will do. However, I also maintain another system with 3.5" hard drives for taking backups on a regular basis - a must-have, and hard drives are still the best solution for backups.
The SSD market is currently unstable as far as interfaces go: SATA's days are numbered, and NVMe is a poor interface for storage as it is not hot-swappable. My Mac and the new AMD motherboard for the build have zero SATA ports. The enterprise market has some very interesting (and expensive) SSD storage technology, as plain NVMe is a no-go in the enterprise.
If you have a NAS mainly as a media server like Plex, many of those files are written once and that's it, so SSD durability can be a non-issue in those circumstances. However, if you use an SSD as an HDD cache, then you do want durability.
Yeah, most people overdo the durability issue; most people will NEVER kill an SSD in a NAS or even come close to it... since they write data and never really delete it, they just read it many times. If you use it as an editing buffer when editing 4-8K video then sure, you NEED durability, but most people don't.
One thing I have noticed on storage forums and in reviews is that SSDs tend to fail without much or any warning, usually a controller failure. Compared to HDDs, which more often seem to give some warning signs, being mechanical devices. Of course I've seen HDDs just outright fail too, but it seems a bit less common.
Delayed, they pushed the launch back and said improvements and updates to the software were coming, so in fairness I delayed it. The Q&A is being recorded this week though
Thanks a lot @@nascompares! Besides the software, what do you think of the hardware? Especially for the Kickstarter price, I am really impressed with the package. Also, any idea if one will be locked in to their software, or is TrueNAS or another OS an option?
I have two 1TB NVMe SSDs as read/write cache in RAID 1; they make it fly. My spinners are two HGST 250MB-cache CMR 7200 RPM 8TB helium drives in RAID 1. It's all in an Asustor AS6702T; this little Gen 2 NAS gets up and goes. Oh, and it has 16GB of RAM as well.
Power consumption is an issue, particularly in the UK, where we are at the mercy of the energy companies and the government in their pockets. Also, how often is a NAS actually accessed, particularly in the dead of night? Another important factor is the type of data you are storing. Many people get hung up on RAID this, RAID that, ZFS etc. SSDs may not be as well suited to parity or other RAID systems because of the write overhead, but based on the data you have, do you even need RAID? There is the saying that RAID is not a backup, so back up the important data and use something like mergerfs to store the less important stuff (quick example below). If you're fortunate enough, have a tiered system: one on-demand server with spinning rust for your vast arrays, and a low-power all-SSD system for most access. Sync between the two. That way you already have two copies of your data, coupled with an off-site backup of the most important data. I find it difficult, not just from an energy standpoint but from a common-sense point of view, to justify running a 24 or 48 bay NAS 24/7 for occasional access. There is an argument for a server that does multiple duties like pfSense or UniFi, but those things will run happily on a low-powered SSD rig.
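For the mergerfs route, a minimal /etc/fstab line might look like this (a sketch only - the paths are placeholders and the options are just a common starting point, so check the mergerfs docs for your version):

    # pool three data disks into one mount; new files go to the branch with most free space
    /mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  allow_other,use_ino,category.create=mfs  0  0

The nice part versus RAID is that each disk keeps a plain filesystem, so a dead drive only takes what was on it.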
Damn, that was intense. Well done, I definitely just got a crash course. So if my main concern is playing 4K HDR movies from my NAS (file size around 24GB), will a system with hard disk drives suffice? Right now I have Plex set up with Ubuntu on a Raspberry Pi 5 and it can't play such a file without freezing every 20 seconds to catch up.
If I wanted to migrate to SSDs in my 4-bay unit, could I start by simply swapping out 2 hard drives for 2 SSDs, and eventually get to all 4 SSDs? I am configured with Synology RAID.
Just finished the 4th iteration of my server/NAS #3. The last two HDDs have been evicted: 14 consumer SSDs and 2 U.2 Intel NVMe drives, 62TB raw, ~50TB usable, and I don't regret it. I do have a 5-bay HDD NAS (36TB usable), powered down most of the month; it comes up just for syncing and a scrub. I hate noise and I love watching my 10GbE network get saturated. PS - ASUS screwed up the design of their all-M.2 NAS; it should have had either a PLX or ASMedia PCIe lane switch in it.
SSD prices have gone up loads. I stuck a 4TB Samsung drive in my server mid last year to replace a 4TB spinner that had started chucking out SMART errors. Cost me £163; it's now £240...
I bought a couple of Crucial 2TB SSDs last summer on sale for $59 each. You'd be lucky to find them on sale for twice that price now. I kick myself every time I think about it for not buying a couple more.
Tbh these are just budget hoodies and custom text. No plans to sell them. That said, you were right that they would be popular, as they get regular shout-outs... more than I do!!!
I had an Intel NUC 8 as my TV PC and a mini-tower PC (mATX mobo) as my Plex server. The NUC had a 1TB NVMe SSD, and the Plex server had three 6TB 3.5" HDDs with an NVMe SSD for the OS. The HDDs cost about $150 each when I bought them in 2020 and 2021. The NUC was on the TV stand and the HDD-based Plex server was a few feet away in a semi-closed cabinet. I let the PC with the HDDs run 24/7, and if I wasn't watching TV or a movie I could hear the HDDs, so I ended up turning it off when not actively using it. I then made a combined TV PC and Plex server in a Cooler Master Elite 110 SFF case with a Mini-ITX mobo and five 4TB SATA SSDs. It is small enough and quiet enough to place on the TV stand. It has five low-noise fans, and I only hear them when it's running software like HandBrake. When I bought the 4TB SATA SSDs in 2022 and 2023, the average price I paid was around $200 each. Silence is sometimes worth the extra price!
I feel like if I had plenty of money, I'd have an Asustor or similar in my office, and an HDD solution in another room that I don't sleep in (if I needed that much space and it added up to less than just using bigger SSDs). I don't want the noise, portability is of minimal use to me but it is easier to move it around the office and won't cause it any bother, and it uses less power. Realistically, I have a WD HDD based home NAS and won't be changing it for years unless it completely dies on me.
Speaking of durability on SSDs, the issue for me in the homelab has always been power loss protection. 99% of SSDs available don't have it. Those that do are either hard to find and/or used enterprise gear, or come in exotic form factors like U.2, PCIe, etc. There are very few M.2 options, and many motherboards don't support the 22110 size. Because of this, I've had it in mind to buy DRAM-less SSDs so that's not an issue. Sure, you get lower speed, but your network is likely not going to be able to handle those speeds anyway. They're cheaper, and you don't have to worry about power loss.
The main issue I see is the limit of size of SSDs. It means that I can't get that big of an array to store all the data. We need to see at least 10TB SSDs at a reasonable price before I could ever consider them.
I have had a better experience using SSDs in my NAS. 1. Faster. 2. More reliable - I had 2 HDDs fail over a 7-year period, the first after 3 years; I have had SSDs running for 5 or 6 years with no failures yet. 3. Quiet - HDDs generate more heat, so my NAS fan ran at higher RPM; with SSDs I never hear the fan. Also, with HDDs I could hear them spin up and hear subtle vibrations. SSDs are completely quiet.
The HDD noise... why does Synology run the OS on the HDDs? It buzzes the first drive all the time. Can I set the OS to run primarily on an SSD in the 2nd bay? For example, I have a NAS just for backup: it does its backup and keeps the HDDs on until shutdown, no apps installed, hibernation set to 1h (if set shorter, it just cycles the drive, which is naughty). Clean DSM install, no errors; HDD hibernation is useless in this case for the main drive.
For boot you need an SSD, and for cache you need another SSD; the rest are spinners. Unless you have NVMe slots on the motherboard - then you can add those to the storage.
I don't see any comment here regarding the maintenance cost of replacing dead drives. I'm setting up my first NAS and this is the question I'm asking myself: if I buy all HDDs, I expect that in 5-10 years I will need to replace all the drives, and since I will likely have this array until I die, that could be 50 years of upkeep, replacing everything at least 5 times. Compared to long-term, archive-like storage, what are the maintenance costs of full SSD? Will I get 20 years from a drive? 5 years?
Very interesting… I think the big selling point for SSD based mass storage will be in the domestic environment. Audio / Video buffs will really appreciate the silence of SSD’s and a “passive” NAS in the room being used for entertainment… just my tuppence worth… 👍
I don't really get the lure of putting SSDs in a NAS when a five bay NAS with HDDs will hit transfer speeds that can choke a 5GbE link never mind anyone still on 1/2.5GbE. I'm not saying it won't benefit some people but most people aren't spending thousands on their NAS setup so the benefit of SSDs is still very niche (at least in the home space).
Just wondering: in a QNAP 4-bay, can we do 3 drives in RAID 0 for speed and use the other 1 to back up the RAID 0 set (a poor man's RAID 1)? Say, 3x 4TB drives and 1x 12TB drive.
Pretty sure if you get a 66TB SSD NAS in the size of the 4 bay he's pointing at, you're gonna get quite a lot of noise out of that too, simply because of the cooling required.
Durability: I have 3x 8TB QVO drives in RAID5/SHR; the spec is 2880 TBW each. I write less than 250GB/day (from SMART), which works out at 31 years of life. I don't worry about durability, but I love the silence.
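For anyone who wants to check that math, it's a few lines of Python (numbers from the comment above; parity writes in RAID5/SHR add some amplification, so treat it as an upper bound):

    # rough SSD endurance estimate: rated TBW divided by daily writes
    tbw_rating_tb = 2880        # rated endurance per drive, TB written
    daily_writes_tb = 0.25      # ~250 GB/day, read from SMART
    years = tbw_rating_tb / daily_writes_tb / 365
    print(f"~{years:.1f} years of rated writes")   # ~31.6 years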
Now is not really the time; the time was 4 months ago. Prices are going up, so wait 6 months at least. The only other option is not fully populating right now. Talking about the Asustor Flashstor 12.
Prime Day and Black Friday/Cyber Monday had some incredible SSD deals:
- 4TB Samsung 870 EVO - $169.99
- 8TB Samsung 870 QVO - $319.99
- 2TB Samsung 980 PRO - $99.99
- 2TB Samsung 990 PRO - $129.99
- 4TB Samsung 990 PRO - $249.99
Can you discuss the issue most people have - I have a bunch of HDDs, from multiple 1TB drives up to one 10TB... should I move all the data from the "small" disks to the largest disk?
I think my only criticism of the "SATA SSDs are realistically limited to 2-4TB" point is that it ignores the kind of people who are looking at a NAS not as a hoard-anything-and-everything vault, but as a convenient way to share files between multiple machines. You give my parents 20TB, they will never fill it - they haven't even gotten close to filling the 500GB SSDs in their PCs - but a NAS would be helpful in letting my father's music collection be instantly accessible from any machine in the house, or as a backup for all my mother's photos (again, nowhere near filling up her machine). Realistically, if I were to get them a NAS, I'd probably get a 2-bay, and assuming a fixed budget for disks, I'd honestly debate using SSDs, since storage capacity wouldn't be a factor, but noise could be.
Nothing about the power consumption of SSD vs HDD in a 24/7 system like a NAS. I have a 2-bay QNAP with 2x 1TB WD Red NAS SSDs purely for power consumption reasons.
Bigger HDD, more noise. Man, was I disappointed with the new WD Purple 12TB. This disk resonates soooo much that I can hear it humming on the next floor of our house. The Synology NVR1622 I installed it in sits on a shelf mounted on a wall in the room just below our bedroom, and the wife has been complaining about that disk ever since, as she hears it as soon as she puts her head on the pillow.
It's actually quite funny how people keep fixating on having faster storage in their NAS, spending thousands of pounds/dollars - and then just plug it into the network over 2.5GbE...
Or enable SMB3 multichannel and use both of the 1GbE NICs. In my case the 2.5GbE NIC integrated into my Z490 motherboard has some stability problems in 2.5G mode; SMB3 multichannel also works with a USB NIC.
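If anyone wants to try multichannel on a Samba-based NAS, it's a single global option - a sketch, assuming a self-managed Samba box (some releases still flag the feature as experimental, and vendor UIs may not expose it):

    [global]
    # let one SMB session stripe traffic across multiple NICs
    server multi channel support = yes

Windows clients negotiate it automatically once they can see both interfaces.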
It's all about the speed. SSDs and NVMe are great, BUT your NAS needs a minimum of a 5GbE port, or 10GbE, to be worth it. The newer 2.5GbE standard is still old mechanical HDD speed, so I still call out to all NAS manufacturers: 5 or 10GbE Ethernet is the MINIMUM. Don't sell products advertising SSDs or NVMe slots when they come with 2.5GbE or even 1GbE ports.
Interesting point about data retention timespans; I just revived a couple of workstations that haven't been plugged in since 2012 and 2013. If there had been SSDs involved... hmmmm, it probably wouldn't have worked?
I'm barely a minute in, and all I gotta say is, as a scavenger, I don't care as long as the space is right. My NAS is made up of old HDDs and SSDs, and using Btrfs it doesn't matter to me; I've already written off speed.
There was some misinformation about how long data will last on an SSD vs. an HDD if left unpowered. HDDs will not hold data for 50 years as claimed in the video. The situation with HDDs is similar to SSDs, where the data has to be refreshed every couple of years to keep it intact.
IMHO, in a HOME lab, NAS SSDs are not for speed but for power efficiency and noise. Yes, you can park HDDs, but it's much more tricky (rough example below). So frustrating that SSD prices went up. I bought an 8TB Samsung for €320 last year and a 2TB NVMe for €69, and now "progress" has brought us back to €50+ per TB :facepalm:
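On the HDD parking point, on a plain Linux box it's roughly this (a sketch - the device and timeout are examples, and Synology/QNAP expose the same thing in their own UIs):

    # spin /dev/sda down after 30 minutes idle (241 maps to 30 min in hdparm's -S encoding)
    hdparm -S 241 /dev/sda

The tricky part is everything that keeps waking the disks back up: indexing, SMART polling, logs living on the array.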
Toshiba 4TB enterprise hard drive: $59. WD 4TB NAS SSD: $299. My NAS has 5 drives in it; that's $300 in HDDs or $1500 in SSDs. If I was running a business with a lot of people hammering the NAS and needed the speed, I could justify it. But for home use that would be a big NOPE.
@@raya4633 As far as SSDs go, almost any drive will be better than a disk-based NAS drive. The main reason to buy a NAS-rated disk in the first place is the harsh wear from reading and writing over and over. The second reason is that NAS disks are better at simultaneous access, which SSDs don't need, because all areas of an SSD can be accessed at any moment, simultaneously. So NAS SSDs are kind of dumb marketing that makes no sense beyond higher-quality silicon. Also, for now, putting anything higher than a Gen 3 SSD in a NAS is a waste, because of processor speeds and lane caps. For example, the Asustor in this video should only be used with low-tier M.2 because the lanes are capped. You can get some Gen 3s on the cheap with 4TB capacity.
2 min 10 seconds... NO no, SSDs are going UP, up, up. NAND is getting very $$$; it's been going up, not down. I am using 2x WD 18TB Red Pro and 2x 1TB SN850X.
Personally, I stick to hard drives, because in an array, with the whole system limited to 1Gbit anyway, 5 drives are fast enough. HDDs do around 150-300 MB/s each, so 3 to 5 drives are plenty if you consider a file transfer overhead of up to 10% (quick math below). OK, drives do have one big problem - response time and IOPS - but for personal use this does not really matter. I use a 5-drive array with one parity drive. I use my server over the network with my phone or my laptop, and I also have other services running on the server besides the NAS that access the drives constantly. My home internet connection is only 100Mbit, so there is no advantage to having an SSD; on my home network there is only a 1Gbit switch, and my router is also 1Gbit (internally it can only handle about 300Mbit, but that doesn't matter, as I don't use its internal switch). My drives have been spinning constantly for over 7 years now and I've had only one failure, and it wasn't in the disk array - it was the SSD I use for the OS on the server. But I always keep a 1-to-1 copy of that drive in case something goes wrong, so I only had to swap the drive and reboot, and everything was back online.
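The back-of-the-envelope math behind that (a quick sketch using the ~10% overhead figure from above):

    # why gigabit Ethernet, not the disks, is the bottleneck
    link_mbit = 1000
    wire_mb_s = link_mbit / 8          # 125 MB/s raw
    effective = wire_mb_s * 0.9        # minus ~10% protocol overhead
    print(f"~{effective:.0f} MB/s")    # ~112 MB/s, below even a single HDD's 150-300 MB/s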
I went for a different hybrid solution: my old DS2015xs+ fully loaded with loud, cheap 16TB WDs sits in the garage for backup and big data I do not need daily. It is off 6 days out of 7, only on for the weekly backup session. Then a DS1522+ with a 10Gb NIC, fully loaded with 4TB Samsung SSDs (which were €180 half a year ago), handles 24/7 data, media, etc. Both run SHR configs with 1-disk redundancy.
So where are all the 24TB SSDs? It seems SSDs could be made in this size if they were built in the same form factor as an HDD, but that would mean killing off all HDD manufacturing, so why would they do that?
The best feature of an SSD array you guys didn't even talk about: the rebuild time of an SSD array compared to HDDs, especially since the rebuild is the period when you are most at risk of losing data. A 10TB SSD array can be rebuilt in a couple of hours, compared to days for HDDs.
Having a RAID array does not exempt you from making backups. It's your own fault if you lose data...
I have 8x 12TB HDDs (ZFS via TrueNAS) and a rebuild has never taken days. IIRC the longest rebuild took about 12 hours, which is still comparatively slow next to SSDs, but nowhere near as bad as days.
@@MarcioZbs That's not the issue. If I have to rebuild an array, I'd much prefer to do it in a couple of hours rather than days, during which the risk of further drive failures is greater - in which case I'd have to not only rebuild the array from scratch but then wait days again to copy all the data from the backup to the new array. At the end of the day, if your hot storage is not greater than 10TB, it's much better and simpler to use SSDs; as a bonus, there's no need to spend money on an SSD cache.
Rebuild time depends on the CPU rather than the disks. Those long rebuilds are a consequence of CPU performance in calculating all the blocks from the parity data.
@@iankester-haney3315 That's why SMR disks are so infamous.
Hybrid makes sense if you are running anything virtualized on your NAS or editing files on your NAS; if a lot of users are accessing the data, then SSDs make sense, IF you have a network fast enough to really utilize the extra speed. But for pure storage, HDDs are really great: long life, cheap per GB, and the current HDD NAS drives don't really eat much power when idle. Currently the cheapest 4TB SSD is $190; a Seagate IronWolf 4TB is $80. If you can pay for a 10Gb home network, you most likely have a "server closet".
10Gb networking can be had for less than that 4TB Seagate SSD if you don't mind buying second-hand. Mellanox cards can be had for $20-$30 and old enterprise switches in some cases for under $100, some of which don't sound like a jet taking off while running. I've been running 10Gb at home for a few years now and I doubt I spent more than $200 to get everything up and running.
you need to let Eddie finish when he talks.
My system consists of an RS3618xs with 7x 10TB WD Red Pro drives and a DS1821+ (for backup) with 5x 16TB WD Red Pro drives, all in RAID 5. At first I utilized the 4x 1Gbps ports on each device but found the transfer speeds to be awful. I purchased a 10GbE NIC for each NAS and disconnected the 1Gbps ports on both boxes. Much improved. I installed 2 M.2 NVMe 480GB SSDs in the DS1821+ and 2 standard 480GB SSDs in the RS3618xs as cache devices. Now the system just rocks. Transfer speeds between the NAS boxes are in the 600-900 MB/s range, and copying files on the same Btrfs-formatted volume hits speeds in the 26GB/s range! One thing I noticed is that the 7 drives in RAID 5 collectively yield over 1 GB/s consistently.
So my take from this video is: SSDs in the NAS (in a RAID 5 config) for performance and speed... and HDDs (in a secondary system) for cold, long-term storage.
Even if an all-SSD NAS is best for your situation, you have to concede that RAID is not a backup. It doesn't matter if you use an external drive or a second NAS as your backup; the best option for that backup will be hard drive(s). The speed is mostly irrelevant. In most situations you can put it somewhere you won't hear it (off-site is even better). Even if you live in a 500 sq ft NYC studio apartment, you can set it to start backing up at the same time as your alarm clock. The backup will be larger and cheaper, allowing for more revisions.
I used an older desktop computer with 3 NVMe SSDs and installed TrueNAS on it for my home NAS about a year ago: 2 SSDs for data storage and one as the boot drive. It works great, twice as fast as the QNAP system I was using before, and it's never crashed. I use it primarily for backing up and syncing files on my 3 other computers. I have 1Gbps Ethernet and I get about 90 MB/s transfer rates to/from the NAS, which is just fine for my needs. If you have an older desktop computer sitting around, this is an excellent way to put it to good use.
There's a case to be made for both SSD and HDD; which is better just comes down to personal needs and priorities. For the most part, though, the choice will still come down to price, whether it's based on straight cost/TB for media storage or one's ability to buy a 12-bay NAS to work around the 8TB cap on SATA SSDs. The more prohibitive the price tag of a NAS beyond 4 bays, the less likely one is to deploy SSDs.
FWIW, my NAS units all have SSDs, as my overall TB needs actually aren't that great and 4TB SSDs are a great match for me, but my NVR is filled with HDDs.
BTW, one thing that wasn't mentioned was size. Look at the physical size of SSD solutions in comparison to 3.5" HDDs. The Synology DS620slim is a perfect example of compactness that's even easily portable.
1. Caching
2. Tiering
Would make interesting talks relating to SSDs.
3. Back up
4. Deduplication
5. Glacier storage. [To disks]
6. Glacier storage. [To tapes]
7. Off site replication.
Would also make interesting talks.
I've got an Unraid 34TB all-SSD NAS. Love it, completely silent. 95% for media, 5% for important documents/photos etc.
SSD/NVMe for apps and scratch space where temp files can be written. Then the hard drives can spin down to save power and noise.
If you get up to serving more than a family, or more than two or so video editors, then the hard drives get more use, as they just get hit more and can't spin down.
Thank you both... I think we can walk away from this knowing that both storage technologies are significant and can be used in various NAS situations which make them both very suitable for Network attached storage. Thanks again for all your efforts.
I learned the hard way how noisy large hard drives are. When my WDMyCloud stopped receiving updates, I bought a Synology NAS with 2 x 10 TB drives by Seagate. The WD was pretty quiet in my bedroom but the Seagate drives are very noisy, so much so that I had to hibernate them during the night. Noise level is probably twice that of the 2 TB WD drives I had in my old NAS.
I've heard a completely different opinion: Seagate are the quietest of the bunch. I even asked on the Synology reddit and ChatGPT, and both came out with the same result.
@@NicolasSilvaVasault Entirely depends on the batch. NASCompares did a video about drive noise and the WD Red Plus were the quietest, as many others have mentioned about how quiet they are. Doubt it honestly matters; buy whichever is more available.
@@Spazza42 Ended up buying those, and when they're idle it's almost completely silent, but when they're working hard, damn, they're super loud.
Thanks guys, that was helpful and interesting. I think it really depends on the purpose of your NAS: a home server, or a big business server where cost per TB is important and noise is not an issue because the servers sit in storage rooms/data centers. For a home server I think it's best to have a mix of the two - NVMe SSDs for quick-access data, and HDDs for large data you don't touch too often that can just sit there, because HDDs are way cheaper and bigger; you can easily offload your junk from the SSDs and clear up space for your current file needs.
This video couldn't have come at a better time for me. I'm just about to "build" a NAS for my company's web application storage, and I needed a few more reasons why we should buy a Synology NAS with SSD caching and not just hard drives. Spot on!
In my 10-year-old PC, I have an M.2 SSD capped at 1 GB/s because of the mobo (Gen 2 slot), and with its smallish buffer the speed drops to about 100 MB/s when the cache is full.
I also have an array of 8 HDDs (Exos 8TB SATA) in RAID 5 able to *sustain* 1.6 GB/s reading or writing without using any cache. The max speed I measured is a bit less than 6 GB/s when staying in the controller cache (Gen3 x8 controller, so the theoretical limit is about 8 GB/s).
Having SSDs in a NAS can work if you don't have a bottleneck somewhere... For caching, better to use a ramdisk (quick sketch below).
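If anyone wants to try the ramdisk idea on Linux, it's a single mount (a sketch - size and path are placeholders, and tmpfs contents vanish on power-off, so only put re-creatable cache data there):

    # mount an 8 GB RAM-backed scratch/cache area
    mount -t tmpfs -o size=8G tmpfs /mnt/ramcache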
How are you able to have sustained write speeds of 1.6GB/s to SATA drives that are limited to 600-ish MB/s r/w?
@Reanimatedyt Each drive uses its own SATA port; 8 combined channels is about 4.8GB/s of interface bandwidth. A benefit of using HBA controllers is removing the chipset-to-CPU hop.
@@iankester-haney3315 So utilizing the drives in unison to read/write data? Interesting, I wasn't aware of that.
@@Reanimatedyt It's a RAID 5 array; each drive can sustain 250 MB/s (quick math below). A RAID 0 could be even faster but wouldn't have any fault tolerance.
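Quick sanity check on the numbers in this thread (a sketch; per-drive figure from the comment above):

    # sequential throughput of an n-drive RAID 5 is roughly n-1 data drives striped together
    n_drives = 8
    per_drive_mb_s = 250
    print((n_drives - 1) * per_drive_mb_s, "MB/s")   # 1750 MB/s, in line with the ~1.6 GB/s observed

Each SATA port only ever carries one drive's ~250 MB/s, so the 600 MB/s per-port ceiling is never the constraint.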
I've got 6x 2.5in 250GB SSDs as a read/write pool. I managed to burn through one set in a little over a year, but that's me writing the whole drive's worth 2-3 times over per week-ish.
This is like watching the Smith Brothers of NAS. Great video, because I just added my first M.2 NVMe to my QNAP, in a separate storage pool, for faster VMs; I'm all ears.
4:30 Robbie, using your idea of hybrid storage, would that not mean that on a 4-bay NAS you would not be able to use RAID 5 or SHR?
The NAS with the HDDs should always be stored in the guest bedroom. Guaranteed that your in-laws will never stay more than 2 nights.
You'd need a NAS system or build with hybrid storage in mind; many units have multiple bay types or NVMe slots to accommodate it. TrueNAS Scale can also natively use ZFS to tier the storage, using various cache arrangements (rough example below).
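Strictly speaking, ZFS gives you caching rather than true tiering, but the effect is similar. A minimal sketch, assuming an existing pool named tank and spare NVMe devices (all names here are placeholders):

    # mirrored SLOG to absorb sync writes, plus an L2ARC read cache
    zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
    zpool add tank cache /dev/nvme2n1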
Thanks for starting a discussion of hybrid & tiered storage - this approach makes a lot of sense and is an ideal best of both worlds approach of course, and it's of considerable interest to me.
I've been searching your website for this and so far I have not found further discussion of it as a stand-alone video (or series). Part of the challenge for me in exploring this is that vendors seem not to be very upfront about the degree to which their products support true tiered storage (as opposed to caching); they may consider it too technical, perhaps.
If you haven't already, please consider creating a video or series that covers some of the current product options, particularly in the entry and middle level ranges, for tiered hybrid storage NAS or DAS. If you have already done a video that covers this recently, or know of someone who has, a pointer would be most appreciated - thanks!
As I'm a fan of SSDs for noise reasons and my storage needs are (relatively) modest, a solution that supports several types of RAID over a mix of SATA SSDs and NVMe SSDs would be particularly interesting.
What a time to be alive!
We use a lot of NAS machines, hard drives and SSDs, in various forms and sizes.
The cost per TB (or GB, whatever floats your boat) for SSDs is indeed quite steep, especially for the enterprise versions.
The technology behind flash memory is quite old (invented in the 1980s by Japanese engineer Masuoka Fujio at Toshiba) and has not really improved that much compared to other storage media, other than cell density per chip (which is not always a good thing). What has significantly improved are the controller chips that drive those flash chips. One thing I believe is omitted in this video is heat/cooling of SSDs: the higher the speeds, the more heat an SSD dissipates, so cooling - forced airflow, heatsinks, thermal design in general - is a real consideration with SSDs. Another thing omitted is that an SSD can wear out very quickly if you're not careful about your application(s).
We have had SSDs wear down to 83% within a year(!) because an application wasn't being controlled in how it used the SSD.
In general I would say SSDs can be a benefit in certain applications, but they won't always be able to replace mechanical HDDs at the current state of the technology, unless you indeed take a second mortgage on the house and go for enterprise-level SSDs...
There is still a long way to go before SSDs can replace mechanical HDDs at more affordable prices (and capacities).
It won't be very long at all. QLC has already made a pretty compelling step toward SSD data drives replacing HDDs. Capacity-wise there is still a price premium on SSDs, but they beat HDDs in all other respects, including lifespan. With PLC in the works and further NAND cost reductions an inevitability, HDDs are rapidly approaching obsolescence.
@@redslate I personally believe there is still a long way to go before SSDs (other than extremely expensive enterprise-level ones) can truly replace HDDs. Our metrics, in our environment, have shown that HDDs (Reds, Red Pros and Golds) easily outlive SSDs. We keep several NAS running whose HDDs have been on for over 10 years with no signs of any deterioration. Whilst the density increase can indeed make SSDs more and more appealing, it does affect their lifespan (and other metrics). The wear on an SSD can come all of a sudden, and in our experience there's not much time left once the alarm bells start ringing (we call it sudden-death syndrome, a well-known issue with SSDs). With HDDs, when the alarm bells go off, you have more ample time to react to the detected issue and resolve it. QLC helps move SSDs forward in this arena, but it also implies, at times, slower speeds (compared to TLC) and lower durability, unless intelligent software/firmware is used (required for steady-state situations, where QLC really struggles).
We also use storage solutions where the SSDs are directly controlled instead of relying on an onboard controller, and there you benefit fully from almost every aspect of SSD technology (speed, durability, recoverability etc.), but that is by no means a NAS, and it is hugely expensive.
I learned something today... I did not know an SSD could lose data if it's not in use over time.
Magnetic storage can also lose data over a long time (decades).
I have an Optane here that's been six months without power; I will try six more months to see if it loses anything.
@@Vidal6x6 From what I understand it's closer to 10 years for SSDs losing their bits while unpowered. Not an expert though, so do your own research.
Ouch! Good you know now... I learned the hard way. 😅
It likely takes *years* under ideal conditions. Consumer SSDs are rated for a _minimum_ of one year unpowered data retention, so all the rumor/conjecture about having to boot your machine monthly is really quite silly.
HDDs lose data over time too. Nothing is 'permanent.' Everything fades eventually.
To answer what's inside my home NAS (not corporate storage):
- Hot storage - 12x Samsung 8TB SATA drives, giving me a healthy storage plate for anything I need and a place to back up to with decent performance.
- Cold storage - 16x WD enterprise 20TB drives (yes, those are loud as hell), giving me years of backup dumping ground; slow, but why do I care when shifting backups from hot to cold is a background task doing a linear copy, playing to spinning rust's strong point.
I go hybrid: my 16 bays are 10 rust, 6 flash. They are completely separate ZFS pools; the SSDs are served as iSCSI / VM storage / git repos, while the rust is meant for media / Plex. Plus, using ZFS you can leverage NVMe to accelerate the pools in a variety of ways. Fully committing to either flash or rust has too many compromises.
Good one. Besides the fact that a rebuild is much faster with SSDs (as are parity scrubs), you missed the point that SSD longevity doesn't depend on the NAND (TLC, QLC or else) alone, but more on the controller chip, which is said to die much earlier than the NAND. So SSDs die from one second to the next, without warning, because the controller chip just dies. What about that? Until now I am very mixed up about which way to go. Maybe enterprise/NAS SSDs do better? 🤔
Keep backups, and use RAID for redundancy, then an SSD controller dying suddenly wouldn't be an issue since you can rebuild the array or restore from backup, and warranty would replace/refund the failed drive.
@@arthur78 Yeah, backups are a no-brainer. Using ZFS pools in a server environment, you don't need a separate RAID configuration to have that redundancy. Thank you.
We should not forget SAS SSDs; the internet is full of 2nd-hand 3.84TB SSDs from enterprise storage solutions (3PAR, Primera, NetApp, PowerEdge, etc.). They draw 3.9W each and most often have 98-99% life left when you buy them cheap. Reformat to 512-byte sectors and voila! It would be very interesting to have more SAS/SATA 2.5"-only NAS reviews.
Really helpful discussion, guys. Asking around, your average I.T. people will usually give you some sort of poor answer but still act confident about it anyway. I'm about to buy 2 HDDs for my new NAS...
My regular NAS has 4 spinning disks. I wouldn't put in SSDs unless the NAS unit was designed for it. For instance utilizing ZFS for tiered storage or segmenting shares based on usage characteristics.
Last year Amazon had the 4TB WD Red SSDs for $150 each. Picked up 6 of them for my 1821+. Also picked up an Intel D3-S4610 to record surveillance footage on. Huge speed difference, and Surveillance Station runs much better. Also, after swapping the fans for Noctuas, the 1821+ is nearly silent now (apart from the PSU), so it could go in the living room without the noise bothering anyone.
I saw that, and I picked up a single D3 3.8TB to do some validation testing. By the time I got to it and thought yeah, this will be great, the price shot up from ~$180 to about $300 😭
Still waiting for cheap enterprise SATA SSDs. May never happen like that again
😞
Are tape backups still a thing?
They could be used for cheap long-term backup, but it is very slow to get the data back.
There are options like the Fujifilm LTO-8 Ultrium data cartridge, which is tape inside - 12TB. I don't know a lot about them, just that they exist. Edit: the hardware looks pretty pricey for individuals though.
Exactly what he said! Tapes are still very much 'a thing', but prohibitively expensive
Tape is _still_ the leading long-term storage method. The main expense is the equipment needed to R/W. Tape itself is relatively inexpensive.
@@nascompares Can you do a further info vid about what your mate said about NVMe losing data if not plugged in? That's really, really worrying.
Will transition to SSDs this year. Mostly because hard drives die on you all the time, but good-brand SSDs last practically forever and are less prone to failure from movement and temperature, aside from all the other positives like power, sound, speed, etc. Also migrated from a NAS to a computer with Unraid at the same time, since NASes and networking have become too expensive at the same time as they are too limited in performance and expandability. Bonus is that streaming has become so good nowadays that you can make a VM and remote in from anywhere and get lag-free desktop performance on any device.
Agree 100%. If you are a company or an advanced user you can deal with redundancy, RAID, etc. For the "98%" of consumers an SSD NAS is literally "set and forget". No disk-based system can do that for you. Your data is there and safe when you need it once in a blue moon - and your main hard drive is an SSD anyway.
Hey guys, nice video! Regardless of what you said, I think there is insufficient attention given to SSD reliability. We have many years of large-scale reliability data about hard drives, but very little reliability data about SSDs. I'm not taking a position here; I just think it's a concern.
Could you explain more what you mean @4:55?
Thanks, lovely discussion. I think it is all about the use case... if you are a crazy video editor, 8K etc... well... SSD. For me it is not the case... my NAS will be used as a glacier solution for family photos and videos + a few small local VMs + a few containers... for that I find having large HDDs still the best option, ONLY if they are fronted by a few SSDs (mirrored) to catch the "heat" of the moment when people (aka family) are throwing their videos/photos in panic mode... and then slowly redistribute it all to the HDDs in the backend... then spin them down... with the prices of SSDs going down I will most likely move a tier of SSDs up and go with 2-4 SSDs of up to 4TB per drive... plus a bunch (4-6 HDDs) in the backend for mass storage.
Not even 8k... I've been editing 4k60 for the better part of a decade. You *need* solid state memory, and not just for editing. Dumping data efficiently is a time-consuming process that saturates SATA (and HDDs can't sustain fast write speeds for large transfers). Even 1080p60 benefits greatly from SSDs.
I’ve learned so much from this channel after accidentally supporting the UGREEN NAS on kickstarter. So stoked I actually got it, even more stoked that this channel exists to educate dummies like me
I kinda see the arguments regarding the noise of a NAS equipped with loud hard drives if the NAS is on the desk next to one's workstation, but that is a worst-case scenario. Modern pro NAS drives are rather quiet. I have a 10 gigabit network at home, so my 10 gigabit equipped NAS can be 15 ft away from my workstation in an open rack (buy a closed rack if noise is a concern). It is a 6-bay with 7200 rpm IronWolf Pro drives (and is paired to an identical unit for backup). Noise is not an issue. I ran my previous NAS and NAS backup units under my desk, and again, noise was never an issue. Speed is not an issue because I am using 6 drives and I see peak performance of around 700 MB/s. If I need something faster than that for work files, then I'll use my workstation's internal M.2 storage and bulk-copy the work over to the NAS once I am done. This video should have been full of benchmarks for noise, performance, etc. As there were none, this is a lost opportunity. The author rarely benchmarks anything beyond Plex, so who is he to comment so stridently regarding noise? I've never had an issue with NAS-specific drives when it comes to noise, and I've been using them since the first Western Digital Red drives came on the market. I recommend the author set up a 6-bay NAS, equip it with 6 large IronWolf Pro 7200 rpm drives, attach it to a 10 gigabit network and then go to town on the performance and audio level testing.
I have 28TB of disk. 24TB is HDD and 4TB is flash. I use HDD for long-term storage. I can max out the HDD interface on all 3 HDDs. The lifespan of the SSD is 5 years; my HDDs have a 5-year recovery service and 5-plus-year warranties. My drives are quiet even when being used, and the only issue is heat. RC
The guy in black, is he wearing a rosary bracelet on his wrist? If so, a big thumbs up!!
I've loved my Synology, but it's finally moving 'off-site' to just serve as a backup target.
The NVMe SSDs did help with the Docker containers I had running on the NAS, but did hardly anything for my workflow. 1Gb Ethernet just doesn't cut it for 4K video editing (raw footage on the NAS, proxies local).
I've tested making an NVMe SSD share on a Linux server with 10Gb Ethernet, and it's entirely possible to store the proxies on the NAS too.
So the Synology is going to be a backup target, and an SSD + 10Gb NAS is currently being sourced (the main issue is finding a low-power CPU + mainboard that supports bifurcation of the x16 slot into 4x x4 NVMe).
Show me a NAS with 10 manageable bays. Otherwise I am using 8x SATA ports on a mini-ITX board, plus USB JBOD disk enclosures attached for archival long-term storage.
I have two NAS systems: one is full SSD and the other is HDD, mostly for backup.
I wanted some data for storage. Like SHTF storage: medical, engineering, agriculture type stuff, for when there is no more food, water, clothing or shelter, and no doctors, farmers, craftsmen or the electricity to run all the equipment for those people. Started a NAS. Got excited and bought some good 6Gb/s HDDs that the NAS does not accept, along with some SSDs that don't work either. Did more research and found that the mean time to failure for both is nowhere near long enough. I now see how ancient societies could have lost everything.
Have a Synology 918+ with the 5-bay expansion and roughly 50TB.
Also have an older desktop running Proxmox with IcyDock bays, loaded up with SATA SSDs to act as my local backup for critical files.
They're both connected to a managed Cisco switch via LACP.
Thinking about a PCIe 4.0 x16 to 4x M.2 adapter card to deploy 4 Optane P1600X 118GB drives for cache. But then, what do I need this speed for? Sitting on 5 disks in RAID 5, and the bottleneck is the 2.5Gbit network.
The only points worth mentioning in favour of SSDs are electric power savings and noise levels.
Edit: gave this a second thought - if I might run into a scenario where I need high random read/write, this might be the way to go.
Warning: the cards that offer PCIe to 4x M.2 slots need the mainboard to support bifurcation, specifically x4/x4/x4/x4.
I use the Gigabyte MC12-LE0, which supports it (to give you an example).
All very true, but don't overlook large-scale VM deployment or large, high-volume, high-frequency databases.
Hard drives are really the only option if you want a capacity that makes a NAS worth having. Plus, SSDs will be kneecapped by the network in most homes. I have four 12TB WD Red drives in my NAS (will be going with Seagate IronWolf Pro drives in the future), and when I first got the NAS, the noise drove me fucking bonkers. I got four roughly inch-thick foam pads to set the NAS on, just to keep it from vibrating the desk it sits on and so on. But I've been running it for four years now, and I don't know if it's just me having gotten used to it, or that when I first had it it was still doing a lot of setup work, like Plex doing its thing etc., but I rarely hear it now.
I only really hear the drives work hard when there's a S.M.A.R.T. test going, or if, as I did recently, I do a complete reorganization of my media and Plex decides it wants to rescan everything, so the drives annoyed me for a solid day.
I have the Iron Wolf Drives in my Synology DS220+. Even though I'm hard of hearing, I still find them obnoxiously loud.
It's going to depend on your storage requirements; however, the market is currently shifting. I built my NAS on FreeBSD/ZFS (RAID 10) and 2.5" 7200rpm drives. It's been running well over a decade, and the pool is very capable of saturating a 10Gbit NIC with all the drives. Hard drives are no slouches when pooled together, and having an SSD ZIL device for writes helps too. I wanted it to be lower power, thus the 2.5" drives, and over the years I would add drives as I needed more space and/or replace the failed ones. I ended up with an 18-mirror setup, or 36 drives in the pool. For the past few years the replacement drives have been SSDs, as 2.5" 7200rpm hard drives are no longer produced and/or are of poor quality. This year I noticed another trend: NVMe drives are cheaper than 2.5" SSDs. I'm in the process of retiring my entire array and replacing it with an all-NVMe setup. My storage requirements are not that large; I'm still running some 250GB drives that are over a decade old -- I've had more newer drives fail than older ones. So five 4TB sticks will do. However, I also maintain another system with 3.5" hard drives for taking backups on a regular basis -- a must-have, and hard drives are still the best solution for backups. The SSD market is currently unstable as far as interfaces go; SATA's days are numbered, and NVMe is a poor interface for storage as it is not hot-swappable. My Mac and the new AMD motherboard for the build have zero SATA ports. The enterprise market has some very interesting (and expensive) SSD storage technology, since plain NVMe is a no-go in the enterprise.
If you have a NAS mainly as a media server like Plex, many of those files are written once and that's it, so SSD durability can be pointless in those circumstances. However, if you use an SSD as an HDD cache, then you want durability.
Yeah, most people overdo the durability issue; most people will NEVER kill an SSD in a NAS or even get close to it... since they write data and never really delete it, they just read it many times... if you use it as an editing buffer for 4-8K video then sure, you NEED durability, but most people don't.
If they are using caching (write caching is active on every write on this system when uploading), the durability might be hit harder than you think.
In my DS218+ I have two Intel D3-S4510 SSDs; I copy off to HDD. WordPress, after switching from HDD to SSD, is impressively fast.
One thing I had noticed on storage forums and in reviews is that SSDs tend to fail without much or any warning, usually a controller failure. Compared to HDDs, which more often seem to give some warning signs, being mechanical devices. Of course I've seen HDDs just outright fail too, but it seems a bit less common.
When is the UGREEN NAS review coming? Do you have a date already?
Delayed, they pushed the launch back and said improvements and updates to the software were coming, so in fairness I delayed it. The Q&A is being recorded this week though
Thanks a lot @@nascompares!
Besides the software, what do you think of the hardware? Especially for the Kickstarter price, I am really impressed with the package.
Also, any idea if one will be locked into their software? Or is TrueNAS or another OS an option?
I have an SSD for caching and two 14TB HDDs for storage in my old Dell R210.
I have two 1TB NVMe SSDs as read/write cache in RAID 1; they make it fly. My spinners are two HGST 250MB-cache CMR 7200 RPM 8TB helium drives in RAID 1. It's all in an Asustor AS6702T Gen 2, and this little NAS gets up and goes. Oh, it has 16GB of RAM as well.
Power consumption is an issue, particularly in the UK, where we are at the mercy of the energy companies and the government in their pockets. Also, how often is a NAS accessed, particularly in the dead of night? Another important factor is the type of data you are storing. Many people get hooked up on RAID this, RAID that, ZFS, etc. SSDs may not be as well suited to a parity or other RAID system because of the write overhead, but based on what data you have, do you even need RAID? There is the saying that RAID is not a backup, so back up important data and use something like mergerfs to store less important stuff. If you're fortunate enough, have a tiered system: one on-demand server with spinning rust for your vast arrays, and a low-power all-SSD system for most access. Sync between the two. This way you already have two copies of your data, coupled with an off-site backup of the most important data. I find it difficult, not just from an energy standpoint but from a common-sense point of view, to justify running a 24- or 48-bay NAS 24/7 for occasional access. There is an argument for a server that does multiple duties like pfSense or UniFi, but those things will run happily on a low-powered SSD rig.
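To put rough numbers on the energy point, here's a minimal sketch; the idle wattages and the tariff are assumptions for illustration, not measured figures:

```python
# Rough annual running cost of keeping drives powered 24/7.
# Idle wattages and the tariff are illustrative assumptions, not measurements.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.30  # assumed UK tariff, GBP per kWh

def annual_cost(idle_watts: float, drive_count: int) -> float:
    """Electricity cost of `drive_count` drives idling all year."""
    kwh = idle_watts * drive_count * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

print(f"8x HDD @ ~5.0W idle: £{annual_cost(5.0, 8):.2f}/year")  # ~£105
print(f"8x SSD @ ~0.5W idle: £{annual_cost(0.5, 8):.2f}/year")  # ~£11
```

Under these assumptions the gap is tens of pounds per year, not hundreds - worth weighing against the SSD price premium for your own drive count and tariff.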
damn that was intense. well done, i definitely just got a crash course. so if my main concern is playing 4k HDR movies on my NAS (file size around 24gb), will a system with hard disk drives suffice? right now i have plex set up with ubuntu on a raspberry pi 5 and it can't play such a file without freezing every 20 seconds to catch up.
If I wanted to migrate to SSDs in my 4-bay unit, could I start by simply swapping out 2 hard drives for 2 SSDs, and eventually get to all 4 SSDs? I am configured with Synology RAID.
Just finished the 4th iteration of my server/NAS #3. The last two HDDs have been evicted: 14 consumer SSDs, 2 U.2 Intel NVMe drives, 62TB raw, ~50TB usable, and I don't regret it. I do have a 5-bay HDD NAS, 36TB usable, powered down most of the month; it comes up just for syncing and a scrub. I hate noise and I love watching my 10GbE network get saturated. PS - ASUS screwed up the design of their all-M.2 NAS. That should have had either a PLX or ASMedia PCIe lane switch in it.
SSD prices have gone up loads. I stuck a 4TB Samsung drive in my server mid last year to replace a 4TB spinner that had started chucking out SMART errors; it cost me £163 and is now £240...
Is that one of the Samsung drives with the bad firmware?
I bought a couple of Crucial 2TB SSDs last summer on sale for $59 each. You'd be lucky to find them on sale for twice that price now. I kick myself every time I think about it for not buying a couple more.
@@DavidM2002 No, a 4TB SATA drive, being used as a cache drive in StableBit DrivePool.
@@tonycosta3302 I wish I had bought 2 of the 4TB ones now, but never mind...
I see you took my advice.
Merch with seagulls
" I hate seagulls "
Love it
Tbh these are just budget hoodies with custom text. No plans to sell them. That said, you were right that they would be popular, as they get regular shout-outs... more than I do!!!
I had an Intel NUC 8 as my TV PC and a mini-tower PC (mATX mobo) as my Plex server. The NUC had a 1TB NVMe SSD, and the Plex server had three 6TB 3.5" HDDs with an NVMe SSD for the OS. The HDDs cost about $150 each when I bought them in 2020 and 2021.
The NUC was on the TV stand, and the HDD-based Plex server PC was a few feet away in a semi-closed cabinet.
I let the PC with the HDDs run 24/7. If I wasn't watching TV or a movie I could hear the HDDs, so I ended up turning it off when not actively using it.
I made a combined TV PC and Plex server in a Cooler Master Elite 110 SFF case with a Mini-ITX mobo and five 4TB SATA SSDs. It is small enough and quiet enough to place on the TV stand. It has five low-noise fans, and I only hear them when it's running software like HandBrake. When I bought the 4TB SATA SSDs in 2022 and 2023, the average price I paid was around $200 each. Silence is sometimes worth the extra price!
I feel like if I had plenty of money, I'd have an Asustor or similar in my office, and an HDD solution in another room that I don't sleep in (if I needed that much space and it added up to less than just using bigger SSDs). I don't want the noise, portability is of minimal use to me but it is easier to move it around the office and won't cause it any bother, and it uses less power.
Realistically, I have a WD HDD based home NAS and won't be changing it for years unless it completely dies on me.
Seems HDD is really only the better choice if you need over 8TB of storage.
Bingo. Or if you're penny pinching.
Why not make SSDs in the 3.5" size instead of 2.5"? That would give more room inside, more capacity, and maybe more than 8TB?
Speaking of durability on SSDs, the issue for me in the homelab has always been power loss protection. 99% of SSDs available don't have it. Those that do are either hard to find and/or used enterprise gear, or come in exotic form factors like U.2, PCIe, etc. There are very few M.2 options, and many motherboards don't support the 22110 size.
Because of this, I've had it in mind to buy DRAM-less SSDs so that's not an issue. Sure, you get lower speed, but your network is likely not going to be able to handle those speeds anyway.
Those are cheaper, and you don't have to worry about power loss.
The main issue I see is the limit of size of SSDs. It means that I can't get that big of an array to store all the data. We need to see at least 10TB SSDs at a reasonable price before I could ever consider them.
The QNAP TBS-h574TX is an SSD NAS with a Core i3/i5, so no CPU bottleneck.
I have had better experience using SSDs in my NAS.
1. Faster
2. More reliable - I had 2 HDDs fail over a 7-year period, the first after 3 years. I have had SSDs running for 5 or 6 years with no failures yet.
3. Quiet. HDDs generate more heat, and thus my NAS fan ran at higher RPM. With SSDs I never hear the fan. Also, with HDDs I could hear them spin up and hear subtle vibrations. SSDs are completely quiet.
True, true!
The HDD noise... Why does Synology run the OS on the HDDs? It buzzes the first drive all the time. Can I set the OS to primarily run only on an SSD in the 2nd bay? For example, I have a NAS for backup; it does its backup and keeps the HDDs on until shutdown. No apps installed, hibernation set to 1h (if shorter, it cycles the drive, which is naughty). Clean DSM install, no errors, yet HDD hibernation is useless in this case for the main drive.
For boot you need an SSD, and for cache you need another SSD; the rest are spinning. Unless you have NVMe slots on the motherboard, then you can add those to the storage.
I don't see any comments here regarding maintenance cost, i.e. replacing dead drives.
I'm setting up my first new NAS and this is the question I'm asking myself: if I buy all HDDs, in 5-10 years I can expect to need to replace all the drives. Add to that that I will likely have this array until I die; that could be 50 years of upkeep, replacing everything at least 5 times.
Now, compared to using long-term, archive-like storage, what are the maintenance costs of full SSD? Will I get 20 years from a drive? 5 years? (Rough numbers sketched below.)
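For anyone wanting to play with that question, a back-of-envelope sketch; the prices and lifespans below are placeholder assumptions, not data:

```python
import math

# Back-of-envelope upkeep cost over a long horizon.
# Drive prices and lifespans are placeholder assumptions; adjust to taste.

def lifetime_upkeep(drive_price: float, drive_count: int,
                    lifespan_years: float, horizon_years: float) -> float:
    """Total spend on buying and re-buying drives over the horizon."""
    generations = math.ceil(horizon_years / lifespan_years)
    return drive_price * drive_count * generations

# Hypothetical 4-bay array kept running for 50 years:
print("HDD:", lifetime_upkeep(100, 4, 7, 50))   # ~$100/drive, ~7-year life -> 3200
print("SSD:", lifetime_upkeep(250, 4, 10, 50))  # ~$250/drive, ~10-year life -> 5000
```

Under these made-up numbers the SSD array actually costs more over the horizon; the real takeaway is that the answer flips entirely on the lifespan and price assumptions, which nobody knows 50 years out.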
Very interesting… I think the big selling point for SSD based mass storage will be in the domestic environment. Audio / Video buffs will really appreciate the silence of SSD’s and a “passive” NAS in the room being used for entertainment… just my tuppence worth… 👍
I don't really get the lure of putting SSDs in a NAS when a five bay NAS with HDDs will hit transfer speeds that can choke a 5GbE link never mind anyone still on 1/2.5GbE. I'm not saying it won't benefit some people but most people aren't spending thousands on their NAS setup so the benefit of SSDs is still very niche (at least in the home space).
I personally like the sound of HDDs in a home scenario, BUT I’m only using 4x 6TB WD REDs.
I'm just wondering: in a QNAP 4-bay, can we run 3 drives in RAID 0 for speed and use the other 1 to back up the RAID 0 set (a.k.a. RAID 1)? Say, 3x 4TB drives and 1x 12TB drive.
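Worth noting the trade-off in that layout: RAID 0 multiplies the chance of losing the fast set, which is exactly why the backup drive matters. A toy calculation, assuming an independent per-drive annual failure probability (the 2% figure is made up for illustration):

```python
# RAID 0 risk: the striped set is lost if ANY member drive fails.
# The 2% annual failure rate is a made-up illustrative figure.

def stripe_loss_prob(annual_fail_rate: float, drives: int) -> float:
    """Probability that at least one of `drives` fails within a year,
    assuming independent failures."""
    return 1 - (1 - annual_fail_rate) ** drives

p = 0.02
print(f"single drive:   {p:.1%}")                       # 2.0%
print(f"3-drive RAID 0: {stripe_loss_prob(p, 3):.1%}")  # ~5.9%
```

So the striped set is roughly three times as likely to be lost in a given year as a single drive - fine for speed, as long as that fourth drive really does hold a current backup.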
Can the Synology DS2413+ use larger-capacity drives than 4TB?
Pretty sure if you get a 66TB SSD NAS in the size of the 4 bay he's pointing at, you're gonna get quite a lot of noise out of that too, simply because of the cooling required.
Durability: I have 3x 8TB QVO drives in RAID 5/SHR; the spec is 2880 TBW each. I write less than 250GB/day (from SMART) - that works out at 31 years of life (rough math sketched below). I don't worry about durability, but I love the silence.
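The endurance arithmetic, using the figures from the comment above:

```python
# Endurance math from the comment above:
# 2880 TBW rating per drive, writing under 250GB/day.

TBW_RATING_TB = 2880     # rated terabytes written per drive
WRITES_GB_PER_DAY = 250  # observed via SMART

years = TBW_RATING_TB * 1000 / (WRITES_GB_PER_DAY * 365)
print(f"~{years:.1f} years to exhaust the rated endurance")  # ~31.6 years
```

Which matches the ~31 years quoted; at typical home-NAS write volumes, the NAND wears out long after the rest of the hardware is obsolete.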
You've been able to get 8TB 2.5" SATA SSDs for a while now... so that is still around the size most people actually buy HDDs in, and they aren't that pricey.
8TB models are (mostly) QLC NAND SSDs. The few exceptions have a comical price per TB.
Always HDD, or only for backup!
Now is not really the time; the time was 4 months ago. Prices are going up - wait 6 months at least. The only other option is not fully populating right now.
Talking about the Asustor Flashstor 12.
Prime Day and Black Friday/Cyber Monday had some incredible SSD deals.
4TB Samsung 870 EVO $169.99
8TB Samsung 870 QVO $319.99
2TB Samsung 980 PRO $99.99
2TB Samsung 990 PRO $129.99
4TB Samsung 990 PRO $249.99
@@redslate Since then, Samsung and other NAND producers have cut production in order to drive prices up. 😕
Can you discuss the issue most people have - I have a bunch of HDDs, multiple 1TB ones up to one 10TB ... should I move all the data from the "small" disks to the largest disk?
I think my only criticism about SATA SSDs being limited to realistically 2-4TB is that it ignores the kind of people who are looking at a NAS not as a hoard-anything-and-everything vault, but as a convenient way to share files between multiple machines.
Give my parents 20TB and they will never fill it - they haven't even gotten close to filling the 500GB SSDs they have in their PCs. But a NAS would be helpful in letting my father's music collection be instantly accessible from any machine in the house, or as a backup for all my mother's photos (which, again, haven't come close to filling up her machine).
Realistically, if I were to get them a NAS, I'd probably get a 2-bay, and assuming a fixed budget for disks, I'd honestly debate using SSDs, since again storage capacity wouldn't be a factor, but noise could be.
Nothing about the power consumption of SSD vs HDD in a 24/7 system like a NAS? I have a 2-bay QNAP with 2x 1TB WD Red NAS SSDs purely for power consumption reasons.
Tbh, you are right, we should have mentioned it. It's an obvious one, but also we shouldn't assume everyone knows that!! Thanks for the feedback man
Bigger HDD, more noise; man, was I disappointed with the new WD Purple 12TB.
This disk resonates soooo much that I can hear it humming on the next floor of our house. The Synology NVR1622 I installed it in is placed on a shelf mounted on a wall in the room just below our bedroom, and the wife has been complaining about that disk ever since; she hears it as soon as she puts her head on the pillow.
I get some vibration from my Seagate Exos, but I'm thinking of putting a piece of cork or something under it to see if that helps.
It's actually quite funny how people keep fixating on having faster storage in their NAS, spending thousands of pounds/dollars - and then just plug it into the network with 2.5GbE...
Or enable SMB3 multichannel and use both of the 1GbE NICs. In my case, the 2.5GbE NIC integrated into the Z490 motherboard has some stability problems in 2.5G mode; SMB3 multichannel also works with a USB NIC.
It's all about the speed. SSDs and NVMe are great, BUT your NAS needs a minimum of a 5GbE port, or 10GbE, to bother. The newer 2.5GbE standard is still at old mechanical HDD speeds, so I still call out to all NAS manufacturers: 5 or 10GbE Ethernet is the MINIMUM. Don't sell products advertising SSDs or NVMe slots when they come with 2.5GbE or even 1GbE ports.
Can I have 4x 2.5-inch SSDs installed in the UGREEN 6-bay NAS without any issues?
Interesting point about data timespans; I just revived a couple of workstations that haven't been plugged in since 2012 and 2013. ==> If there were SSDs involved, hmmmmm, they probably wouldn't have worked??
I'm barely a minute in, and all I gotta say is: as a scavenger, I don't care as long as the space is right. My NAS is made up of old HDDs and SSDs, and using Btrfs it doesn't matter to me; I've already written off speed.
I'd love to change to ssd or nvme, but the size isn't there for what I've got, and if it is, the price is just not worth it.
I am going to get some ssds just for the randoms
There was some misinformation about how long data will stay on an SSD vs. an HDD if left unpowered. HDDs will not hold data for 50 years as claimed in the video. The situation with HDDs is similar to SSDs, where the data has to be refreshed every couple of years to keep it intact.
"bigger is always better" "wow" lol amazing.
IMHO, in a homelab, NAS SSDs are not for speed, but for power efficiency and noise. Yes, you can park HDDs, but it's much more tricky.
So frustrating that SSD prices went up. I bought an 8TB Samsung for €320 last year and a 2TB NVMe for €69.
And now "progress" has brought us back to €50+ per TB :facepalm:
A Toshiba 4TB enterprise hard drive is $59; a WD 4TB NAS SSD is $299. My NAS has 5 drives in it. That's $300 in HDDs or $1500 in SSDs. If I was running a business with a lot of people hammering the NAS and needed the speed, I could justify it. But for home use, that would be a big NOPE.
SSDs: way longer life, faster, less power, less noise, cheap-ish if you use the correct drives.
@@raya4633 As far as SSDs go, almost any drive will be better than a disk-based NAS drive. The main reason to have a NAS disk in the first place is the harsh wear from reading and writing over and over. The second reason is that NAS disks are better for simultaneous access, which SSDs don't need, because all areas of the disk can be accessed at any moment, simultaneously. So NAS SSDs are kind of dumb marketing that makes no sense, other than higher-quality silicon. Also, for now, putting anything higher than a Gen 3 SSD in a NAS is a waste, because of processor speeds and lane caps. For example, the Asustor in this video should only be used with low-tier M.2 because the lanes are capped. You can get some Gen 3s on the cheap in 4TB capacity.
2 min 10 seconds..... NO, no, SSDs are going UP, up, up. NAND is getting very $$$; it's been going up, not down. I am using 2x WD 18TB Red Pro and 2x 1TB SN850Xs.
Tbh, now that SSDs are diverse in their interfaces/form factors, pricing has become better.
Personally I stick to hard drives because, in an array, with the whole system limited to only 1Gbit, 5 drives are fast enough. HDDs manage around 150MB to 300MB per second, so 3 to 5 drives are plenty if you consider the file transfer overhead of up to 10% (rough arithmetic below). OK, OK, I see drives have one big problem, and that is response time and IOPS, but for personal use this doesn't really matter. I use a 5-drive array with one parity drive. I use my server over the network with my phone or my laptop, and I also have other services running on my server besides the NAS; they all access the drives constantly. My personal home internet connection is only 100Mbit, so there is no advantage to having an SSD on my home network; there is only a 1Gbit switch, and my router is also 1Gbit (internally it can handle only 300Mbit, but this doesn't matter, as I don't use its internal switch). Also, my drives have been spinning constantly for over 7 years now and I've had only one failure, and it was not in the disk array; it was the SSD I use for the OS on the server. But I always keep a 1-to-1 copy of that drive in case something goes wrong, so I only had to swap the drive and reboot, and everything was back online.
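The arithmetic behind that point, as a quick sketch; the per-drive figure is an assumed sequential throughput, not a benchmark:

```python
# Aggregate HDD array throughput vs. usable network bandwidth.
# The per-drive figure is an assumed sequential-read speed.

def array_mb_per_s(drives: int, per_drive: float = 150.0) -> float:
    """Combined sequential throughput of a striped array in MB/s."""
    return drives * per_drive

def link_mb_per_s(gigabits: float, overhead: float = 0.10) -> float:
    """Usable MB/s of a network link after ~10% protocol overhead."""
    return gigabits * 1000 / 8 * (1 - overhead)

print(f"5x HDD array: ~{array_mb_per_s(5):.0f} MB/s")  # ~750 MB/s
print(f"1GbE link:    ~{link_mb_per_s(1):.0f} MB/s")   # ~112 MB/s
print(f"10GbE link:   ~{link_mb_per_s(10):.0f} MB/s")  # ~1125 MB/s
```

So even a modest array outruns 1GbE several times over, which is the commenter's point: on a gigabit network, the network, not the disks, is the bottleneck.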
I went for a different hybrid solution: my old DS2015XS+ fully loaded with loud, cheap 16TB WDs in the garage for backup and big data I do not need daily. It is off 6 of 7 days, only on for the weekly backup session. Then a DS1522+ with a 10Gb NIC, fully loaded with 4TB Samsung SSDs (which were €180 half a year ago) for 24/7 data, media, etc. Both run SHR configs with 1-disk redundancy.
I am shopping for silence. Can't take the noise any more. Budget looks to be $2K US to get some silence too LOL.
So where are all the 24TB SSDs? It seems SSDs could be made in this size if they were built in the size of an HDD, but that would mean killing all HDD manufacturing, so why would they do that?