Supermicro CSE-846 Chassis... ebay.us/WQACmF
HGST Drives... amzn.to/4eWfoYI
More Drives...! amzn.to/3Yffgfz
(Affiliate Links)
This is the kind of diy content I love to watch!
The easiest way to cool the drives in the middle of the chassis is probably to mount two or four large fans (120-140+ mm) on top of them, blowing down. You seem to have enough clearance for them, and it shouldn't be too hard to attach them there. That would drop the temps quite a lot. Installing one or two fans at the standard exhaust locations on the back of the chassis should help with the airflow as well.
Thank you, this was interesting and very informative. Thanks to this video, I became aware of the difference between SATA and SAS, and between SFF and SFP. Plus your style of clear and thorough explanation is refreshing, and through it I get the experience of all this exotic hardware. Very nice.
I have no idea what I just watched. But I enjoyed it.
A giant pile of hard drives lol
Wow. This video is 100 out of 10... Great work. ✨✨
Thanks!! It took a LONG time to plan and film all of this, so that means a lot! 🙂
Can't wait to see you replace one specific drive after a failure. Good luck finding it!
It's not hard to do if you just label them when installing them...
The drives are all labeled. I will know exactly which one is failed if one were to fail.
Off-spec drives stacked all on top of each other without any isolation or dampening. What could go wrong? :-)
FYI: Going back to a different video, the font here is much better for me (personally). I like this console. Thanks for sharing
I used a different resolution here. I'll keep that in mind for next time, thanks for the feedback :)
This guy is trying to brownout his neighborhood when he turns on the disk array. 😂 Awesome build bro.
Haha, it's not nearly as crazy as some of the other builds I've been seeing around the youtubes!
@@HomeSysAdmin you are very close to 1 petabyte, I don't care what anybody says, but that's quite an achievement in itself. As a single guy, my home pc has 21TB over nine disks, 2 are nvme and the other seven are samsung flash drives, sata6 & USB3. CPU is ryzen 9 5900x, GPU is 16GB Radeon VII and 32GB system ram, motherboard is an ASUS ROG Crosshair VIII. It's been a couple of years since I've upgraded the pc. So I reckon it's about average or maybe slightly above average in the overall score of things? 😂 Anyway, cheers from Texas, bud. I truly enjoy your videos both on the PV and PC side of tech gear.
@@daveyd0071 I have one machine that will take 72 drives FACTORY STOCK - and they're all reachable hot-swap drives that you don't have to pull the machine from the rack to get to.
Supermicro SSG-6048 series.
Anything I've seen that can hold more drives have been "vertical load" type as pioneered by BackBlaze.
Those can exceed 100 drives per 4u space, on EXTREMELY long cases that don't fit in many racks.
BTW - your "home" system has a LOT more SSD than most, way above the average even today.
As far as big Chia farms go - I've seen multi-machine farms reporting to FlexPool that exceed 15 PETAbytes, though most of those are "only" around 600 TB per machine.
this is the wet dream of every data hoarder :)
Just landed on this page, enjoyed the video, although the CSE847 (45-bay JBOD) would be a better choice as it has 7 fans and 2x 1400W PSUs. Anyway, your heating issue on the 16 drives is due to your placement: you're blocking the airflow. Turn the disks 90 degrees and you will have better airflow.
Uh, coming soon... check back in a week. LOL. And yes, I realize sideways was not optimal for airflow but it's the only way they fit with the backplanes.
Nice video! Loving the thinkering!
Cool video! Any plan on putting two exhaust fans on the back of the enclosure? May help pull some of that hot air out from the rear immediately so the rear drives aren't getting soaked in that heat. Might help a smidge.
No fans on the back; however, I did have to completely redesign this to improve air flow. I have all of the drives lined front to back now instead of sideways. It was a tight fit (and only fits SATA in this orientation) but it's much cooler now.
@@HomeSysAdmin good deal... 👍🏽
Nice work! Clean execution!
Thanks!
Thank you for posting this video. You have a new subscriber. Keep up the great work! 👍
I truly enjoyed watching this, my friend
Impressive build 👍
Massive video - super interesting! Thanks
Is that Digital Spaceport place always sold out? They've been sold out since at least when this video came out.
They do tend to sell out fast. I just checked with him and he said he will be listing more tonight. If you're a member of his channel, you can get access to his discord where you get an early heads-up.
@@HomeSysAdmin I have no idea what discord is. I don't follow social media much. Thank you for the info.
I was just notified that 14TB and 18TB SAS were posted, prices have increased a bit though unfortunately shop.digitalspaceport.com/
I was just notified that 60x 14TB and 40x 18TB were posted.
@@HomeSysAdmin Is it a normal expected cycle of every few weeks? Waiting for my tax return to come back 😬
Really impressive. Thanks for the video.
You should test the theory on those vibrations w/ rubber grommets vs w/o!
SuperMicro includes rubber grommets on their SSG-6048 - but only on the internal 2.5" fixed mount drives.
Great build. Nice research. 8K for HW. Assume 'free' electricity from solar. So what is the incentive? Are you actually making $$$? Are you able to share details? Thanks for sharing
Currently about 300 USD a month
I would put rubber grommets between the aluminum strip and the hard drives; you may run into spindle sync vibration, which can cause issues with the hard drives. Just an FYI, as I ran into this on racks of storage arrays.
This is my kinda jank, I love it
I create an array of HDDs with opposing spins, so that the vibrations 'may' cancel each other out. If not the vibrations, at least the spindle start and stop momentum will cancel.
Did you consider de-pinning your extensions and just moving them into the molex connectors instead of splicing the wiring?
No I hadn't, but that's an interesting idea for sure assuming the pins are the same size.
@@HomeSysAdmin I'm pretty sure that all the pins are the same except the 12VHPWR connector. Should be on the spec sheets from Molex or the Intel ATX spec.
This is something I'd love to do, but can it house 1 petabyte or more if I upgrade it?
Thanks for your videos, they really inspired me. Nice and great idea. 😃
Can it be used as RAID storage?
It could be used as raid storage with an appropriate controller.
Thank you for sharing, great video.
Great video. Do you provide consulting for a newbie that bought some NetApp JBODs? I was close to understanding after this video but can't get my head around it. Thanks
I don't do direct consultation; however, I am always looking for video ideas. What enclosure do you have and what questions specifically? Perhaps I can grab one and make a video 🙂
@@HomeSysAdmin I just bought 4 DS2246s, and the issue I encounter is which PCIe card to connect them to the PC with, and which cables. There are a ton of different ones, and for someone that doesn't know much about this, it's hard to understand which is compatible or not.
Basically: if you get some NetApp DS2246s and want to run them as a JBOD on a Windows 10 PC, how do you do a full installation?
Any help would be great!
PS: nice setup in the video, wish I could understand it like you do.
Can you control your air so that it goes bottom-up or top-down using walls or tunnels?
Maybe a wall from the top down, pushing the air underneath more than up, over, and out the back. Maybe?
That's kind of the path I'm on now. I made a baffle out of the 1/32" cutting-mat plastic (that I was using for battery insulators) and placed it over the top to force air downward, and temps have not hit 40C in over 12 hours.
** Only because it is easy to overlook (assuming it's the same most places): if you ever need that style of aluminum material (U-channel, L-channel, etc.), most hardware stores also carry it in larger sizes, often near where they sell table tops and the like, for way cheaper than buying it in quantity in the metal section :)
How will you change hard drives?
Why would I have to change them? They're new drives and hopefully won't be failing for many years.
@@HomeSysAdmin But they will fail, and I don't see an easy way for you to change them. It looks like you will have to shut down the whole server and use tools to free the drive that you want to change. That was why I asked.
@@Mysticsam86 Correct, I would have to shut the whole case down (not the server or cases upstream). They fail eventually, hopefully in 10 years if I'm lucky. Maybe 1 fails 5 years from now and ok, one shutdown to replace the drive is fine. If they're failing so frequently that it becomes cumbersome to replace drives, then there's a problem [with the quality of drives]. But yes, I do see where you're coming from.
Didn't LTT make one that is over 1 petabyte with 22-terabyte drives?
Nice! How many TB do you have now?
Won't that be a pain to replace a drive when one goes bad?
Yes, it will be a pain. None have failed yet though :)
@@HomeSysAdmin Did you at least put stickers on each HDD with the serial number so when one goes bad you know which one it is?
@@UnknownMoses They're all labeled with a chassis number and drive number for identification that matches the mount points and partition labels in the operating system.
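Matching an OS-reported failure back to the physical sticker on a drive usually starts from the serial number. Here's a minimal sketch of that lookup, assuming `lsblk -o NAME,SERIAL -P` pair-style output (the sample text below is made up, not real drive data):

```python
# Sketch: map block-device names to drive serial numbers so a failed
# device reported by the OS can be matched to the physical label on
# the drive. Parses `lsblk -o NAME,SERIAL -P` style KEY="value" lines.
import re

def parse_lsblk_pairs(output: str) -> dict[str, str]:
    """Return {device name: serial} from lsblk's -P (pairs) output."""
    mapping = {}
    for line in output.splitlines():
        fields = dict(re.findall(r'(\w+)="([^"]*)"', line))
        if fields.get("SERIAL"):  # partitions report an empty serial
            mapping[fields["NAME"]] = fields["SERIAL"]
    return mapping

# Hypothetical sample output; real serials will differ.
sample = '''NAME="sda" SERIAL="ZL2A1XYZ"
NAME="sda1" SERIAL=""
NAME="sdb" SERIAL="ZL2B9ABC"'''
print(parse_lsblk_pairs(sample))  # {'sda': 'ZL2A1XYZ', 'sdb': 'ZL2B9ABC'}
```

In practice you would feed it the real output of `lsblk -o NAME,SERIAL -P` via `subprocess` and compare the serial against the sticker on the drive.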
Hi there! Thanks for this great tutorial. I reorganized my farm yesterday, shucked externals, and connected 12 drives via the Lenovo SAS expander, and I've been seeing a lot of stales. The drives show up fine, but sometimes after boot-up a couple of drives take a few seconds to appear on my PC. I am using Windows 11. Any tips to diagnose if there is something wrong?
Any idea how long it will take to recoup your investment in this HW mining Chia?
No idea. I'm hopeful it will be around 2 years but impossible to predict as I'm hoarding the tokens instead of selling right away. It's pretty much gambling - but this is, and will always be, a hobby to me.
Right now, a TB mining Chia without compression is good for around 35 cents/month.
With compression it depends on the compression level; Gigahorse C5 level is a little less than 50 cents/month and can still be CPU-farmed (or use a low-power GPU like a 1050/2050-range card).
@@bricefleckenstein9666 I'm currently replotting with Gigahorse C7.
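The per-TB rates quoted above lend themselves to a quick back-of-the-envelope estimate. A sketch using those figures; the rates come from this thread and the farm size is a made-up example, so treat nothing here as authoritative:

```python
# Rough Chia farming revenue estimate based on the per-TB rates
# quoted in the thread above (assumptions, not authoritative figures).
UNCOMPRESSED_USD_PER_TB_MONTH = 0.35  # plain plots
COMPRESSED_USD_PER_TB_MONTH = 0.50    # roughly Gigahorse C5 level

def monthly_revenue(farm_tb: float, usd_per_tb_month: float) -> float:
    """Estimated gross revenue per month for a farm of the given size."""
    return farm_tb * usd_per_tb_month

farm_tb = 800  # hypothetical farm size in TB
print(f"uncompressed: ${monthly_revenue(farm_tb, UNCOMPRESSED_USD_PER_TB_MONTH):.2f}/month")
print(f"compressed:   ${monthly_revenue(farm_tb, COMPRESSED_USD_PER_TB_MONTH):.2f}/month")
```

Note this is gross revenue; electricity, hardware depreciation, and token price swings decide whether it ever recoups the build cost.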
What use would you have for a jbod at home?
Chia farming. Home NAS storage.
Very cool design, and I love how you make use of all that wasted space. I also give you props on the cost savings. With all that said, I think in most cases I would pony up for a higher-density JBOD. Cooling (dynamic on the newer SM JBODs), hot-swap ability, the ability to trigger fault lights to find disks, and fewer failure points are just some of the reasons that come to mind.
I read in the comments that you said you hope your disks last for a number of years, and I do as well. But honestly, disks fail all the time, and as you scale out, you're going to see it happen more and more. Are you monitoring disk health (not just temp)?
I want to buy a 9211-8i card and an 03X3834 expander to fit on some Fujitsu motherboard. Is this configuration going to give me staggered spinup? I'm googling and reading a lot and just can't figure out if the controller does the spinup staggering or if it's the backplane (which I cannot fit in my case).
Also, a VERY good video. I just like how you explain pretty much everything that people would probably ask about.
The drives I have internally do stagger-spin but I couldn't tell you exactly how that's happening. I was always under the impression that is handled by the backplane. They technically are connected "through" the backplane. I can try and direct connect some the next time I'm messing around with it and see what happens.
@@HomeSysAdmin I ended up ordering a card that has four 8088 ports (LSI SAS9201-16E) and also ordered four 8088 to sata breakout cables. Yes it's a little janky 'cause cables are going out from the back of the case then inside but for my small farm that's enough. Tnx for the reply and keep up the good work!
3:20 is somewhat misleading. This being a -TQ passthrough backplane will absolutely work connected with regular SATA and molex cables, provided you use SATA drives. You can literally plug it into a consumer PC motherboard, straight into the onboard SATA ports, absolutely no problem.
Yes, the TQ backplanes will work on a SATA controller with SATA drives. There are a million ways you can build things and this video/discussion is in reference to using SAS drives.
why jbod instead of raid?
There's no reason to create a RAID volume for my particular application. I'm just using it to store Chia plot files, which, if I lose any, I can simply recreate. You could totally create RAID volumes, though, if you'd like for your use cases.
@@HomeSysAdmin ok understood
RAID is fundamentally flawed, as it's not bit-rot aware. As such, you put the drives in JBOD and use a modern FS like ZFS.
@@damiendye6623 Uhh, if you say so
@@timramich Do you know what bit rot even is and why RAID can't spot it? Do you know how RAID actually works at a low level? It's not "if I say so"; it's a fact that no enterprise solution in 2023 uses plain RAID, for that reason. They all use filesystem-aware striping and checksums so they can spot the disk that lies. You've seen files that are damaged and can't open, or random crashes on a machine: all can be caused by bit rot.
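The core idea behind checksum-aware filesystems like ZFS is that a stored hash tells you *which* copy of the data is wrong, something plain RAID parity cannot. A toy file-level illustration of that idea (real filesystems checksum per block, not per file, so this is only a sketch of the principle):

```python
# Minimal illustration of checksum-based corruption detection: record a
# hash at write time, verify it at read time. A single flipped bit in
# the stored data produces a completely different digest.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Record a checksum when the data is written...
original = b"chia plot contents"
manifest = {"plot-001": sha256(original)}

# ...and verify before trusting a later read.
corrupted = b"chia plot cOntents"  # one flipped character
assert sha256(original) == manifest["plot-001"]
assert sha256(corrupted) != manifest["plot-001"]
print("bit rot detected")
```

With a mirror, the filesystem can then read the other copy, see which one matches the stored checksum, and repair the bad side, which is exactly what parity-only RAID cannot decide.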
Curious to hear how long to recoup your investment for the work this system is performing?
Well, that depends on how well this video does on YouTube. LOL
Nice work...
How do you quickly install the hard drives?
You're VERY focused on Chia, and I'd like an explanation of that. I did ETH mining when it was cool, but I just don't get Chia as a cost proposition.
An explanation on what? It's a disk array, you can use it for literally anything.
@@HomeSysAdmin I think he is asking why you chose to farm Chia over any of the thousands of other cryptos out there. I would be interested to know as well, thank you.
@@squidboy0769 Oh. Because I think it's fun to "play" with hard drives, piles of enterprise gear, and enjoy the system administration. GPUs that are WAY WAY over-priced and consume gigantic piles of electricity are not fun at all.
GPU mining is mostly dead for small miners, unless you have VERY VERY low cost of electric and an existing farm.
Making a few CENTS per day on most GPUs over electric just doesn't make sense.
GPU farming profitability collapsed very quickly after ETH went to proof of stake - you can actually make MORE today on an investment into hard drives for Chia vs the same investment into GPUs for ANYTHING ELSE.
And here I thought I had a Media Storage addiction problem having 60TB and growing...
I just built 120 TB; now I know those are rookie numbers lol
@drew21t man that's so dope
Big, but not massive.
Check into the Supermicro 6047 and 6048 machines sometime - semi-widely available used, and have up to 72 caddy-based 3.5" drives (and 2 internal 2.5").
Then you have to start looking at "top loader" machines for higher drive capacity, like the Dell 7000.
I don't recommend the 6047 machines if you're doing Chia; putting a GPU in those is a nightmare. The 6048 has x16 slots instead of x8, which will fit an A2000 or the like.
The 6047 is fine for the older "low compression doesn't need a GPU to farm them" levels though, like Gigahorse up to about C5.
Wow, typical America, those cases are so cheap and plentiful. The same one I just got was about 1000 AUD (~700 USD)
Holy cow, that's expensive. I have a buddy in the Brisbane area who was farming for a while, and I know the kind of trouble he had finding hard drives for reasonable prices. I considered shipping him a few at some point, but the shipping cost from the US to AU is so dang expensive too...
Cool build but servicing those internal drives will be a bitch.
Yeah... it has some disadvantages for sure!
This was really cool, until I saw that this whole thing is for Chia...
Just NAS it, my dude; you'll have storage for life instead of playing with monopoly money
Storage for life is one idea, but considering the typical lifespan for hard drives is 5-6 years... They will be long dead by the end of my lifetime. Also, not "playing with monopoly money". This is a hobby. I do not expect to "make money" out of it. It's just for fun.
I recommend using hddtemp and then the command 'watch hddtemp /dev/sd[a-z]'... this way you see exactly which drive has what temp.
Nice! That responds much quicker than querying smartctl for a pile of drives as well. Thanks for the tip!
Friends don't let friends do Seagate. They have the WORST reliability record out of all enterprise-class drives.
Maybe... but out of 100-something drives, I've only had 1 fail, which was a consumer-grade external 8TB that I RMA'd back.
This is a super old and kind of unreliable factoid, especially now. Seagate is just as reliable as any of them now.
Had best luck with Seagate. Every Toshiba I have ever bought failed
@@Inphinityproductions I had the exact SKU that caused the uptick in failure rate. It was a specific capacity (I think 1 or 2TB) back in the day and it died easily. Now, however, Seagate is rock-solid. Toshiba has been problematic for me too, so I don't use Toshibas anymore.
This is the most jank "disk shelf" I have ever seen. For $200 you can buy a netapp DS4246 shelf, slap the drives in the front, connect the back and go. These drives will most likely die an early death from heat and vibration. I run 42 WD RED Pro drives in a proper shelf and they have lasted 6-9 years so far.
I have a DS4246. There is a reason they're $200. They're terrible shelves - very loud and very power hungry. So inefficient...
Pimp my ride babyyyyyyyyyyyyyyy
Guys like you make HDDs expensive