Nice video. I'd have made a few different choices but that's the beauty of home building. Everyone can build to their own specs. Note on your performance test: when you wrote 16GB and were surprised about the speed of the HDDs, your HDDs were actually idle. All 16GB went straight to the memory cache. TrueNAS will use all remaining memory to cache data before writing to disk, so the true performance of your HDDs will not be apparent until you exceed the memory of your NAS.
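To put rough numbers on that caching effect (all figures below are illustrative assumptions, not measurements from the video):

```python
# Model: the first part of a transfer is absorbed by free RAM (ZFS write
# buffering), so the observed average speed overstates the disks' speed.
def observed_speed_mb_s(total_gb, cache_gb, ram_mb_s, disk_mb_s):
    """Average MB/s over a transfer whose first cache_gb lands in RAM."""
    cached = min(total_gb, cache_gb)
    spilled = total_gb - cached
    seconds = cached * 1024 / ram_mb_s + spilled * 1024 / disk_mb_s
    return total_gb * 1024 / seconds

# A 16 GB write into ~16 GB of cache never touches the disks' real speed:
print(round(observed_speed_mb_s(16, 16, 2000, 250)))   # 2000
# A 100 GB write exhausts the cache, and disk speed dominates:
print(round(observed_speed_mb_s(100, 16, 2000, 250)))  # 291
```

So a test file several times larger than RAM is needed before the array's real throughput shows up.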
I have two of these cases with 8 drives in each. One is a simple JBOD connected to the other. I created cardboard inserts to make sure that the 2 side fans are forced to cool the HDD, as they can get quite hot in this case without an air guide.
Just bought a CX4728 (not listed on their website, but it's just a deeper CX47xx) and it should be here tomorrow. Pretty excited to get it set up and figure out how best to cool it and exhaust heat.
Sliger cases are nothing special... poor, actually, for a NAS: no drive backplane. For the price of their case you can buy a used enterprise server. I bought a pair of Dell T320s years ago with an 8-bay SAS backplane and 96GB RAM each for like $800 (for both). They are still going strong and replacement parts are cheap. Nothing against Sliger, I own 2 of their desktop cases.
@@jeremytine Their NAS specific case(s) have a 10 drive SATA backplane. I would have gone with one of the newer 5 or 6U cases Silverstone is coming out with, but they haven't released a timeline.
@@justinyoung5348 correct me if I'm wrong, but I looked at the CX4712 and it says direct wired, aka no PCB backplane. Nor do I see any PCB in photos of the case insides. Wired-cable hot swap isn't the same. And you can still get a mostly complete used Dell server with SAS backplane, CPU, motherboard, RAM and HBA for the same price. For a NAS, an older Dell that is way more tested seems like a much better value.
@@justinyoung5348 so when you get your CX4728, please report back how they manage to stabilize the loose SATA/power cables to simulate a hot-swap backplane, because without seeing it, it just sounds janky. Not trolling, seriously curious.
Just a note: you want to wire a 5-bay to use all 5 even if the goal is to have 4 drives - replacing a drive is correctly done without removing the old one first. That said, sometimes you do not have the option.
That's really just a matter of cosmetics. You can keep the old drive connected, if you desire, without having it in a bay for the short period you're replacing it.
@@zamadatix It is not (edit: cosmetic). Drives die around the rebuild, handling drives with no parity is prone to failure, and finally there are the human errors related to interaction. It is objectively better to have an empty bay than shutting down the system and handling the rebuild that way. Of course there is no shame in doing things good enough and I am throwing no shade.
@@mckidney1 How does having an extra bay or not change whether you have to shut the system down? Hot swapping is a function of the SATA controller, not the bay. The bay is just a nicer-looking way of holding the drives long term. Whether or not you have the extra open bay, you don't touch any of the previous drives already in bays (even the failing/failed one) until after the parity rebuild is complete. If "well, opening the case is just too risky" then the problem isn't the lack of additional open drive bays, it's the lack of enough parity drives (like here using 2 drives for redundancy instead of just 1, so you keep redundancy during a failure and don't have to worry about a butterfly flapping its wings during a rebuild). Alternatively, just keep a USB UASP SATA adapter around; a lot more handy than a permanently empty bay.
@@zamadatix I think the context of the video is lost: the option was there to wire an already-empty bay, and the self-proclaimed new NAS builder did not take it. I find this argument to be made for argument's sake - yes, it can be done even now, but at additional effort applied at the time of failure. Choosing between "wire the extra bay at the cost of a SATA cable and ports you already have empty on the main board" and "you can open the case and wire it when the drive fails" - do you honestly think the decision comes down to cosmetics? Edit: My answer is no; the difference is that one is advice and the second is an excuse. Why would you send an excuse to a new builder?
It's fine to wire it up, it's also perfectly fine to not wire it up. If you had said "You may want to wire bay 5 to save yourself time in a future drive replacement" I'd have agreed, but you insisted one always wants to and gave the reason that it's how you correctly replace a drive, as if you couldn't just as correctly replace a drive without the 5th bay wired. Also note I didn't say anything about wiring the 5th bay at all; again, that's just one option of many for correctly replacing a drive. There is so much misinformation in the NAS space because people like to describe their preferred way of doing something well as if it's the only correct way of doing it well. It's not that it's bad information - many will indeed want to wire a 5th bay (or even all of them if they have the ports) - it's just presented in the wrong light, as if not doing it this way can't be a valid option as well.
Happy to see some server content, always fun for some reason. The DS380 has a problem with high HDD temps, so I would recommend you create/add/print a fan shroud that forces the intake fans to blow through the HDD cage; otherwise the air will just blow past the drives without providing any cooling. There are plenty of designs and guides if you google.
@@derekdal5185 I did something similar for my build. Used a piece of cardboard that I cut from the box it came in and some duct tape. Worked a treat for me too.
Only having it on occasionally, you'll want to make sure it's able to complete data scrubbing cycles ~once a month, especially with drives that size. Great vidya!
I've mostly moved to mini itx over the years and 12vo boards would be a godsend for working in those smaller cases. But you have to put a lot of faith in board manufacturers not to cheap out.
The hot Swap part depends on how you configure the BIOS/UEFI. I had to get a cracked one for my N54L to get HotSwap because the standard one was locked down and hid that option from me.
I built myself a NAS a few weeks ago after thinking about it and putting it off for a couple of years and it is wonderful, I really wish I had done it sooner!
Same. I built my NAS 9 months ago and I regret not having built it years before. It's running 15h/d on Win 11 :´D It's just network storage for me and my wife and we're happy. It costs around 70/year in electricity... not so bad for a 6TB "cloud" (two 6TB drives in a Windows storage pool) :D
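For anyone curious how a running-cost figure like that comes about, here is a back-of-the-envelope estimate (the wattage and electricity price are assumptions for illustration, not measured values):

```python
# Rough yearly electricity cost for a small NAS running 15 h/day.
# Assumed: ~30 W average draw, 0.40 EUR/kWh (German household rate).
watts = 30
hours_per_day = 15
eur_per_kwh = 0.40

kwh_per_year = watts / 1000 * hours_per_day * 365
cost_per_year = kwh_per_year * eur_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> about {cost_per_year:.0f} EUR/year")
# -> 164 kWh/year -> about 66 EUR/year
```

Which lands right around the ~70/year mentioned above; a beefier box or 24/7 operation scales the number up accordingly.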
That was me a few years back. A few learnings I'd like to share: HBAs work better. TrueNAS does not allow adding more disks to a pool (at least I have not figured it out), so max out your pool size when creating it.
You have to add more vdevs to increase the pool size. A TrueNAS pool is all its vdevs added together, with redundancy being in the vdevs themselves. So with a RAIDZ2 vdev, 2 of the drives in that vdev can fail.
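To make the capacity arithmetic concrete (drive counts and sizes below are hypothetical, not the build in the video):

```python
# Usable space of a RAIDZ vdev is (drives - parity) * drive size,
# ignoring ZFS metadata and padding overhead (real figures are a bit lower).
def raidz_usable_tb(drives, drive_tb, parity):
    assert drives > parity, "need at least one data drive"
    return (drives - parity) * drive_tb

print(raidz_usable_tb(4, 14, 1))      # RAIDZ1, four 14 TB drives -> 42
print(raidz_usable_tb(4, 14, 2))      # RAIDZ2, same drives -> 28
print(2 * raidz_usable_tb(4, 14, 2))  # pool of two such RAIDZ2 vdevs -> 56
```

The pool is the sum of its vdevs, which is why growth happens a whole vdev at a time.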
WOW that looked really simple to setup... GOOD JOB!!! And the fact that you get about 250-300MB/s (you said GB/s btw lol) that's faster than what I get copying files from my HDD to my Gen4x4 nvme SSD lol - AWESOME WORK!
Your hard drive is only one drive. Roman has, depending on parity strategy, 2-4x your hard drives. And striping data across the drives, writing blocks as they come, is a natural fit for this workload, so the sequential speed scales roughly linearly.
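A toy model of that scaling (the per-drive speed is an assumed figure; in practice the 10G network or the SATA controller caps the real number):

```python
# Sequential throughput of a RAIDZ vdev scales roughly with the number of
# data (non-parity) drives, until some other bottleneck takes over.
def raidz_seq_mb_s(drives, parity, per_drive_mb_s=250):
    return (drives - parity) * per_drive_mb_s

print(raidz_seq_mb_s(4, 1))  # 750 MB/s theoretical, before any bottleneck
```

Which is why a single external HDD at ~250 MB/s can't keep up with even a small striped array.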
Roman! Scale is so much more than a NAS. It is really a basic hypervisor with built-in apps and Docker capabilities run by Kubernetes. Increase your RAM to that 64 gigs and run a Jellyfin server, or Pi-hole, or even a VPN server/client. I've been learning it for the past year or so and I'm just a normal gamer. It has been really awesome learning it and the Debian Linux behind it. Plus there is an amazingly supportive community behind it, as well as iXsystems having a basically open-source policy for development of the apps and containers on it.
I have to say that I've been fiddling with TrueNAS for the past 2 weeks, and outside of super simple NAS use, TrueNAS is THE WORST. Everything is incomplete and unfinished. The docs are utterly useless as a guide because they try to explain things but never provide examples or recommendations. The community sucks a lot; most threads never get resolved, or they just link another thread about TrueNAS Core. YouTube tutorials never go in depth to teach you ways to do things (you'll never find a tutorial over 30 mins). The experience has been a nightmare.
@mycosys as I am only running a small home lab with a few containers. It seemed unnecessary when scale could handle it. I did look at Proxmox though. Amazing system.
I have this case for my NAS. I found that under a lot of load the fan setup isn't great and the drives got quite warm. My solution was to take two old fan frames and use them as ducts on both intakes, and with some foam on the back of the fans they seal up against the drive bays much better. If I was doing it now I'm sure a 3D printer and some planning could get a nicer solution, but mine has been running for 10 years or so now without any problems.
For those simple requirements (e.g. no VMs, Docker, iSCSI, etc.), I would take an out-of-the-box solution any day of the week. It is cheaper, easier to support down the line, way more power efficient, same reliability and more compact...
Welcome to the NAS world, Roman! The 10G Ethernet ports are the best selling point of this board. However, some would be just as happy with 2.5G Ethernet. In my case, I have an x4 card housing four 2.5G ports. I am running Xpenology DSM 7.1 on a 12-core/24-thread Xeon with 64GB RAM under Proxmox. If you use DSM 7, you can automatically back up the main NAS to a second NAS without hassle. That is the benefit, plus all the other apps you can house on it, including Surveillance Station!
I rebuilt my little 2-system homelab with a completely overkill Epyc system for the PCIe slots and BMC as well (full ATX board in a Fractal Define R5 though). So nice having a relatively modern board with recent IPMI that I don't need an old version of Java to access. As others have mentioned, the SATA card might be a problem, but IIRC OCuLink-to-SATA cables are pretty inexpensive if you do run into problems, or of course you could just grab an LSI HBA flashed to IT mode off eBay. Even with cables they can be had pretty cheap. That transfer speed was actually pretty slow; going 10G to 10G, even without tweaking anything, you should be seeing higher speeds unless the system you were transferring from was reading from a couple of spinning disks.
You'll want to replace that SATA card with a proper HBA in IT mode for long term reliability as others have stated. You also might want to address the cooling issues this case has as others have suggested so you're not cooking your hard drives causing premature failure. Also with the wonky memory management in TrueNAS scale you'll only have 16GB of the 32GB available for ARC which may end up causing some performance issues if you are editing directly off the NAS so you may want to up that to 64GB. Memory is cheap so best to do it now while it's cost effective. There's a lot of good information and helpful/knowledgeable people available on the TrueNAS forums to point you in the right direction if you need help with anything. Nice job on the video, thanks for sharing your build.
You should always have your operating system on a ZFS mirror. The best way to do this is with two good USB 3 drives. FreeNAS does very little; it doesn't need an SSD, let alone a fast one, to work; all the work is done on the storage drives. If you don't feel comfortable using USB drives, at least put in a second cheap SSD. A common way commercial solutions do it when they don't want the OS outside is to use internal USB ports or SATA DOM sticks (which are basically USB drives over SATA).
One thing I've liked to do recently, instead of SSD servers, is get HDDs and Optane (which works with AMD when used by NAS OSes). I have a 160TB raw / 112TB usable HDD NAS; I have 4 of the 118GB Optane drives that I use as read and write cache, but I also have 2 of the 960GB 905P drives for special metadata (I plan to reverse this in the future: 4x 960GB for read/write with 3x 118GB for special metadata). The special metadata drives REALLY speed up the sense of snappiness.
@@zazelskycrest2525 I am using TrueNAS Scale, however most OSes can use optane, even windows as long as you use a 3rd party software like primo cache. The Level 1 Forums are a good place to go for Optane info, and i think they may even have a few videos on the subject.
A friend and I have recently been working on VERY similar NAS projects, using Asrock Rack's X570D4I-2T, an AM4 equivalent to what you bought, paired with the Ryzen 5 Pro 5650GE 6-core 35W part. It also has two OCuLink 4i ports, so can run 8 drives without add-in cards
Get a UPS!!! Also, a tip: mark the date of install, for tracking battery health/lifespan. Also stagger offline backups too, in case you get locked out due to ransomware.
My NAS setup is a Raspberry Pi 400 running OMV with two 500GB SSDs, one being a backup of the other. It has been running with no hiccups for like 500 days. Consumes a minuscule amount of power; I measured it but can't recall exactly.
Silverstone makes good but, in some respects, very outdated cases. For such a build I would recommend the Jonsbo N2 (white!), which is more elegant, well thought out and made from aluminium. If you need all 8 HDDs, then the Jonsbo N3.
I didn't even know a 4-core Epyc CPU existed, as they're usually monsters with the core counts. Seems very odd. Are they basically defective larger-core-count CPUs that have just had the defective cores disabled and been sold to the embedded market?
All the other Zen 1 products have been discontinued, embedded is the only place they can go while AMD guarantees availability until 2028. They're not defective; who would buy a defective CPU and knowingly install it in an embedded system?
@@shanent5793 When I say defective, I don't mean not functioning! I mean they could have been higher-tier CPUs in the line that didn't hit the performance standards set, so were binned and possibly had cores disabled from the factory... rather than get thrown out!
I have that case. It’s an actual nightmare to deal with. I hope you are able to keep the general heat under control and you never have to do maintenance on the thing.
Nice, I had the same issue: needed a backup of my backup, i.e. if you don't have a backup of your backup, you don't have a backup. Used the same case, an Intel i5 CPU, a SATA expansion card to give 5 more ports, installed/migrated TrueNAS on NVMe from old hardware, and voilà: extra storage for the backup NAS...
Really curious why you didn't use the OCuLink port for more SATA ports? It would have been much more efficient to set up that way and the cabling would have been much cleaner.
He didn't need more drives, that's one; two, you still need an OCuLink-to-SATA interface. Unless you were thinking about adding an external HDD cage connected via OCuLink?
G'day Shiek, Makita & Roman. My storage is in a Cooler Master N300 case; it supports 1x 2.5" SSD, 8x 3.5" HDD & 2x ODD. The OS is on a 128GB SSD, and I'm using 3x 6TB & 5x 4TB HDDs as a backup for archiving old home videos I have edited for family & friends. I am just using W10 as the mobo had an activated key, while the hardware is an i3-3220 + ASRock Z77 Professional Fatal1ty (10 SATA ports) + 8GB DDR3-1300, which were just bits & pieces left over from upgrades over the years, so I thought I'd give them something to do rather than buying new parts or a NAS, as they still work fine.
Interesting video. 👍 Not sure if you mentioned this in the video, but I'd like to know how much power the system consumes at idle and under working load. Thanks!
Interesting, I have a very similar setup, same case, only I went with the EPYC3251D4I-2T. I was wondering how to attach a fan to it; I've been running it fanless for a while. Did you just run those zip ties under the heatsink and tighten them? Is it required to remove the heatsink to mount it that way? I was also thinking about trying 3D-printed mounts I found on some forum and attaching one of the low-profile Noctua NH-L12S coolers there, but worry that it won't clear the drive cage then...
My only recommendations would be buying a proper HBA that is flashed to IT mode, and buying more smaller-capacity drives for the same total capacity. Additionally, make sure you have a few spare hard drives on hand for when your drives die. You buy smaller-capacity drives so that if a drive dies in your pool, the time it takes to rebuild the data is less, since the capacity is smaller. (Writing data at 250MB/s to five 4TB drives will be faster than writing to two 10TB drives in the event of a drive failure.) Wendell from Level1Techs has taught me a ton and I want to make sure I pass on what I've learned to others so they don't make the same mistakes as me. (I bought four 12TB drives for fun and then I learned how long it took to build my RAID 5 array: 1000 minutes. If I had gone with more smaller drives it would have been done substantially faster.)
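The capacity/rebuild-time trade-off above can be sketched with a quick calculation (the sustained write speed is an assumed figure; real resilver times also depend on pool fullness and concurrent load):

```python
# A rebuild/resilver has to rewrite the whole replacement drive, so the
# best-case time scales with capacity divided by sustained write speed.
def resilver_hours(drive_tb, write_mb_s=200):
    return drive_tb * 1e12 / (write_mb_s * 1e6) / 3600

for tb in (4, 12):
    print(f"{tb} TB drive: ~{resilver_hours(tb):.1f} h best case")
# 4 TB drive: ~5.6 h best case
# 12 TB drive: ~16.7 h best case
```

Triple the capacity, triple the window during which the degraded array is one failure away from data loss.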
I'd love to know how long it would take to rebuild one of those drives after a failure. I know that people prefer to use more smaller-capacity drives because it means less time in a position where another drive failure may result in data loss.
I think he's not the correct person to ask. He kind of implied he doesn't work with NASes a lot and isn't that deep into either the software or the (home-level) hardware; that's why somebody else would help him set up the thing later. I guess it must've been a mix of reasons around this, and Scale being Linux-based might play a part as well.
General wisdom with RAIDs for a while now is to keep at least 2 redundant drives, simply because the chances of a failure while rebuilding are pretty large. That being said, an effective backup strategy is more important: if the NAS is a total loss, all relevant data should ideally still be at two different physical locations.
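The failure-during-rebuild risk can be sketched with a toy calculation (the failure rate and rebuild window below are assumptions; real drives from one batch fail in a correlated way, which makes the true risk considerably higher):

```python
# Rough odds that another drive fails during a 2-day rebuild, assuming
# independent failures at a 3% annual failure rate (AFR).
afr = 0.03
rebuild_days = 2
surviving_drives = 3

p_one = 1 - (1 - afr) ** (rebuild_days / 365)   # one drive fails in window
p_any = 1 - (1 - p_one) ** surviving_drives     # any of the survivors fails
print(f"{p_any:.3%} per rebuild, before accounting for correlated wear")
```

Small per-rebuild, but it compounds over many rebuilds and balloons once you factor in same-batch wear, which is why a second parity drive is a common recommendation.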
For anyone RAID 5-ing, I got some advice from bad experience. When the HDDs get towards end of life, they tend to give SMART errors at similar times. When you get the first SMART error, replace the faulty drive immediately with a new one. Reconstruction of data takes a looooong time, more than a day (possibly a few), and the anxiety of waiting for your data to reconstruct makes it feel even longer. Then you should assume that all drives are lost and change the rest of the drives one by one. Doing that work can take more than a week (please use a UPS) and you should pray no catastrophic failures happen in that duration. The problem with modern HDDs is that the read/write speeds are too low for their capacity. A good demo might be a dry run of how long a reconstruction scenario would take, before any real data is placed in the system, in case push comes to shove. RAID 10 reconstructs faster because there are fewer XOR operations, and you could withstand more drive losses, losing just 14 TB. The real shocker for me in this video is that I knew Germans hybridized wolves with dogs, but hybridizing a cat with a tiger was blasphemy, man!
Just something to be aware of: NAND loses its data over extended powered-off periods. Even though you'll only be firing it up once every few months, it's probably good to boot it up once a month for a scrub and to keep the NAND cells fresh too.
It's great you went for the IPMI; you could script remote activation of the NAS for backup, then shutdown. Interesting that you went for the Toshiba drives based on price; a few months ago I updated mine with the 18TB ones and the price point has already moved on!
That's pretty much what I did with my HP Microserver. It wakes up at 10PM every day, backs up my main server then goes back to sleep. It's been a couple of years since I built it but if I remember correctly the sleep script waits for at least 5 minutes with no disk activity and CPU load below a certain percentage before shutting down. I use IPMI to wake it at other times because the machine itself is buried in a barn some distance from the building my server is in.
I'd recommend doing a deprecation of old videos, where you put really old videos on your HDD NAS and keep newer videos on the SSD NAS, so you can free up space on the SSD NAS.
Thank you for a new thing you are showing everyone. You have chosen the hardware wisely, except for the M.2 NVMe! You could have chosen a Samsung one, since you will not have parity for the boot device. But the build is aesthetically pleasing and tidy. Thank you.
Heh, just filled up my NAS too. Ordered a pair of used enterprise 4TB drives to hold me over until I build a new box. Currently running Unraid but want to move to ProxMox and TrueNAS. My old AMD FX-8150 will finally be retired for good once I build that box.
Great case, been using one for a long time now. The only downside is that if you have an M.2 on the back of the motherboard there is no airflow (I made a vent for mine and that solved the problem).
You do know that ESD is not the horrible monster everyone says it is, and that a wrist strap properly installed won't make things harder? Add to it a properly grounded mat and your concerns won't matter.
Are you going to move this to one of your factories after syncing all the data? If you could set it up to sync nightly you'd have a great backup. Worst case, if you could bring it back once every week or two, sync the changes and take it back, you'd still be in a good position if anything happened to your primary data.
Those cheap SATA controllers can get really hot. I have a few lying around that produce read errors when overheated, and ZFS wrongly assumes the problem is with the disks and kicks them out of the array. An easy solution is to use thermal adhesive to glue a heatsink on them. With disks this big, I would have gone with RAIDZ2 instead of RAIDZ: rebuilds can take a long time and put a lot of stress on the disks, so there's a good chance another disk fails during the rebuild. And on a last note: label your drive trays with the serial numbers. You'll thank yourself later...
thanks for this info :) I will make sure to check the temp of mine
@@der8auer-enyes my dear. I proud for you try you’re best you’re branes but please make more educashin four the computer. Germany xcelent four make the car but no strongly four understanding the pc. I proud you make you’re first nas but if I am honestly you need four more study the book. So no feel so much shame four no comprehenshin my sweet. Many my country ready four teach you how become excellent computer. Never give up and never remember my beauty.
Would be a good idea to swap out that SATA controller for an H310 HBA card flashed into IT mode (or one of the newer versions) with SAS-to-SATA cables.
@@RanjakarPatel Don't be too harsh on him. He is trying his best, and he is otherwise quite intelligent. He helped the fire bear company sell its volcano paste, even though the idea was first developed in India.
@@RashakantBhattachana Honestly, it is quite hard to watch. But he is a good person at heart, and he is making a serious effort for his level of intellect and education. So I generally enjoy it and appreciate him for it.
"25 TB of data, that's not too much"
Spoken like a true datahoarder :)
I am a simple man. I enjoy tech content, I enjoy cats.
Same but with dogs. 👍
my cat watches me use the toilet
@@DadlyShadow Mine too, I had to install a kitty door in the bathroom because my cats wanted in so bad they would pester me till I opened the door.
I'm allergic to cats. ☹😿
@@DadlyShadow mine drinks water out of the toilet.. so yea.. cats are weird
For the PCIe SATA card, if you won't be switching to an HBA as others have suggested, look up which controller it's running, and make sure it's all native ports and not SATA multipliers. Those (and overheating) are the main sources of issues on them.
The SATA controller is probably an ASM1166. It's PCIe x2; a port multiplier would have been x1 only.
Yes, it's using the ASM1166
I've been using HBAs on my unRAID NAS, zero issues with it. Rock solid.
Can never over-appreciate such Thermal Kitty episodes.
Fun video, man. It's cool to see a total pro like yourself trying something new and making a "beginner builds NAS" video.
As others have noted: you should switch out the SATA controller for a proper LSI HBA in IT mode. However, if you plan on using all eight bays in the future, you will have to mind the limited space between the PCIe slot and drive bay no. 3 from the top. Also, this case runs very hot when fully populated with drives. There are 3D-print STL files available to direct the side fan airflow over the discs, which helps a ton! If it isn't already obvious, I have been running the same case in my NAS (24/7) for the last 6 years with 11 drives. The hardest part is cable management. Hit me up if you need pointers, or recommendations on parts.
Thanks for highlighting that Silverstone case, I'm planning my own NAS build and was struggling to find a decent case but this checks all the boxes.
Wouldn't recommend.
The drive bays are extremely cheap and plastic.
Drive cage is a pain to remove
Everything feels like a cheap case from 20 years ago (slightly fewer injuries though)
Jonsbo N3?
@@zxcvb_bvcxz I had the same experience. It's a pain in the ass to wire that thing.
Do you REALLY need hot-swap bays? Is it mission critical that you can swap drives while your server is turned on? If so, then the Silverstone case is not really a "quality" server case for mission-critical applications. If not, then a Fractal Node 304/804 or even the older Define R7 is more than fine. I use my NAS as a media server and for my own cloud services.
@@sprocket5526 It's nothing mission critical, I just want to backup my music directory as I don't want to re-rip nearly 5000 CDs again, took me long enough to do it already. I just like the design of the case and having the drive access at the front is handy to have.
As for the comment above yours about wiring being a pain, I've wired up worse looking cases than this so it's not a concern.
Be careful with those consumer-grade SATA cards. I understand that you got the SATA card for free, but TrueNAS doesn't like them too much.
Your mileage may vary, but I lost a pool by using these types of SATA cards.
It might be wiser to look at an enterprise-grade HBA. A used LSI 9300 HBA doesn't cost too much, and if you're using hard drives, an LSI 9200-8i.
100% this, or just use the OcuLink port that's already on board.
@@twiikker yup!
Agree. HBA card is a better choice
I am also searching for a replacement for my old WD NAS, so it was very nice to see you do this build. That Silverstone case was perfect for looks, cooling and function. Now I need one of those too. Thanks for the great content, kitty included. DAS All
The hard drives are the components under the most thermal stress there, so I'd change the airflow inside the case to intake through the drives and exhaust out the back.
Knowing that he'll only use it occasionally as a backup and won't work the drives hard, that's unnecessary.
That's literally how it's set up.
Great video and love your cat being in the frame. Every PC channel should have a resident / co-presenting cat!
Those ASRock Rack boards are fantastic for home servers - I ended up getting the C246 WSI, which has an i3-9100T 25W part that supports ECC (although UDIMM, which is more of a pain to get hold of) and luckily has 8 SATA ports (4 physical, 4 via OcuLink).
I went with a 2U short-depth rackmount case to go into my network cabinet, so I removed the required 12V pins from the 24-pin ATX connector, reterminated them into a 4-pin Molex connector and only ran those, which saved a heap of space versus the 24-pin cable and the included adapter.
Synology stuff is nice, I've been using one of their routers I picked up cheap and it's been the most stable, trouble-free thing I've ever owned. For a DIY NAS there are some older LSI cards that are getting CHEAP these days that do 6Gb SAS and 12Gb SAS and can do SATA as well if you want to roll with new cheap drives. I picked one up that'll do 4 drives on its single port and has 2x100GB of eMLC flash onboard for cache. It's almost 10 years old, costs 1/100th of its original price, and Windows picked it up immediately, and the BIOS setup shows 100% health on the flash cache. Too cool! I might pick up a card with more ports that lacks the eMLC cache, since I don't really need that, it just looked fun to play with.
Nice video. I'd have made a few different choices but that's the beauty of home building. Everyone can build to their own specs.
Note on your performance test: When you wrote 16GB and were surprised about the speed of the HDDs, your HDDs were actually idle. All 16GB went straight to the memory cache. TrueNAS will use all remaining memory to cache data before writing to disk. So the true performance of your HDDs will not be apparent until you exceed the memory of your NAS.
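A rough back-of-the-envelope model of this effect (hypothetical numbers, not measured from the video):

```python
# Sketch: why a 16GB write test mostly measures RAM, not disks.
# TrueNAS/ZFS buffers async writes in memory and flushes them to
# disk in transaction groups, so small tests never touch the platters.
def observed_write_speed(test_gb, cache_gb, ram_mb_s, disk_mb_s):
    """Blended apparent speed when the first `cache_gb` of a transfer
    lands in the RAM cache and only the remainder waits on the disks."""
    cached = min(test_gb, cache_gb)
    spilled = max(0.0, test_gb - cache_gb)
    total_s = cached * 1000 / ram_mb_s + spilled * 1000 / disk_mb_s
    return test_gb * 1000 / total_s  # MB/s

# 16 GB test on a box with ~24 GB free for caching: pure RAM speed
print(round(observed_write_speed(16, 24, 3000, 250)))   # 3000
# 200 GB test: converges toward the true ~250 MB/s of the array
print(round(observed_write_speed(200, 24, 3000, 250)))  # 281
```

The takeaway matches the comment: any benchmark smaller than free RAM reports cache speed, so size the test file well past total memory.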
For 4-5 HDDs you could use a Jonsbo NAS case, which is much more compact and nicer.
Yep, and for 8 HDDs they already have the N3.
I didn't even know about them. Just checked and they indeed look really cool. Will remember that- thanks!
I'm still rocking a Node 304. A bit bulky.
The mods they are making are legit, if only I had a 3D printer.
I have two of these cases with 8 drives in each.
One is a simple JBOD connected to the other.
I created cardboard inserts to make sure that the 2 side fans are forced to cool the HDD, as they can get quite hot in this case without an air guide.
the rackmount cases from Sliger have been making a home NAS very tempting for a while
Just bought a CX4728 (not listed on their website, but it's just a deeper CX47xx), and it should be here tomorrow. Pretty excited to get it set up and figure out how best to cool it and exhaust heat.
Sliger cases are nothing special... poor, actually, for a NAS: no drive backplane. For the price of their case you can buy a used enterprise server. I bought a pair of Dell T320s years ago with 8-bay SAS backplanes and 96GB RAM each for like $800 (for both). They are still going strong. Replacement parts are cheap.
Nothing against Sliger, I own 2 of their desktop cases.
@@jeremytine Their NAS specific case(s) have a 10 drive SATA backplane. I would have gone with one of the newer 5 or 6U cases Silverstone is coming out with, but they haven't released a timeline.
@@justinyoung5348 Correct me if I'm wrong, but I looked at the CX4712 and it says direct-wired, aka no PCB backplane. Nor do I see any PCB in photos of the case internals. Cabled hot swap isn't the same. And you can still get a mostly complete used Dell server with SAS backplane, CPU, mobo, RAM and HBA for the same price. For a NAS, an older Dell that is way more tested seems like a much better value.
@@justinyoung5348 So when you get your CX4728, please report back on how they manage to stabilize the loose SATA/power cables to simulate a hot-swap backplane, because without seeing it, it just sounds janky. Not trolling, seriously curious.
Just a note: you want to wire a 5-bay case to use all 5, even if the goal is to have 4 drives. A drive replacement is correctly done without removing the old one first. That said, sometimes you do not have the option.
That's really just a matter of cosmetics. You can keep the old drive connected, if you desire, without having it in a bay for the short period you're replacing it.
@@zamadatix It is not (edit: cosmetic). Drives die around the rebuild, handling drives with no parity is prone to failure, and finally there are the human errors related to interaction. It is objectively better to have an empty bay rather than shutting down the system and handling the rebuild that way. Of course there is no shame in doing things good enough and I am throwing no shade.
@@mckidney1 What about having an extra bay or not changes whether you have to shut the system down? Hot swapping is a function of the SATA controller, not the bay. The bay is just a nicer looking way of holding the drives long term. Whether or not you have the extra open bay or not you don't touch any of the previous drives already in bays (even the failing/failed one) until after the parity rebuild is complete.
If "well, opening the case is just too risky" then the problem isn't the lack of additional open drive bays it's the lack of enough parity drives (like here using 2 drives for redundancy instead of just 1 so you keep redundancy during a failure and don't have to worry about a butterfly flapping its wings during a rebuild). Alternatively just keep a USB UASP SATA adapter around, a lot more handy than a permanently empty bay.
@@zamadatix I think the context of the video got lost: the option was there to wire an already-empty bay, and the self-proclaimed new NAS builder did not take it. I find this argument to be made for argument's sake. Yes, it can be done even now, but at additional effort applied at the time of failure. Choosing between "wire the extra bay at the cost of a SATA cable and ports you already have empty on the main board" and "you can open the case and wire it when the drive fails", do you honestly think the decision comes down to cosmetics? Edit: My answer is no; the difference is that one is advice and the second is an excuse. Why would you send an excuse to a new builder?
It's fine to wire it up, and it's also perfectly fine not to. If you had said "You may want to wire bay 5 to save yourself time in a future drive replacement" I'd have agreed, but you insisted one always wants to, and gave the reason that it's how you correctly replace a drive, as if you couldn't just as correctly replace a drive without the 5th bay wired. Also note I didn't say anything about wiring the 5th bay at all; again, that's just one option of many for correctly replacing a drive.
There is so much misinformation in the NAS space because people like to describe their preferred way of doing something well as if it's the only correct way of doing it well. It's not that it's bad information, and many will indeed want to wire a 5th bay (or even all of them if they have the ports); it's just presented in the wrong light, as if not doing it this way can't be a valid option as well.
Happy to see some server-content, always fun for some reason.
The DS380 has a problem with high HDD temps. So I would recommend that you create/add/print a fan shroud that forces the intake fans to blow through the HDD cage; otherwise the air will just blow past the drives without providing any cooling.
There are plenty of designs and guides if you Google.
I put in a foam baffle to block off the path to the MB and force air through the HDDs, worked great
@@derekdal5185 I did something similar for my build. Used a piece of cardboard that I cut from the box it came in and some duct tape. Worked a treat for me too.
Only having it on occasionally, you'll want to make sure it's able to complete data scrubbing cycles ~once a month, especially with drives that size.
Great vidya!
I've mostly moved to mini itx over the years and 12vo boards would be a godsend for working in those smaller cases. But you have to put a lot of faith in board manufacturers not to cheap out.
The hot-swap part depends on how you configure the BIOS/UEFI.
I had to get a cracked BIOS for my N54L to get hot swap, because
the standard one was locked down and hid that option from me.
I built myself a NAS a few weeks ago after thinking about it and putting it off for a couple of years and it is wonderful, I really wish I had done it sooner!
Same. I built my NAS 9 months ago and I regret not building it years earlier. It's running 15h/day on Win 11 :'D It's just network storage for me and my wife and we're happy. It costs around 70/year in electricity... not so bad for a 6TB "cloud" (two 6TB drives in a Windows storage pool) :D
Nice video; a nice addition would be a used-parts list and price indications.
That was me a few years back. A few learnings I'd like to share:
HBAs work better.
TrueNAS does not allow adding more disks to a pool (at least I have not figured it out), so max out your pool size when creating it.
You have to add more vdevs to increase the pool size. The TrueNAS pool is all vdevs added together, with redundancy being within the vdevs themselves. So with a raidz2 vdev, 2 of the drives in that vdev can fail.
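The vdev arithmetic above can be sketched like this (a toy calculation, ignoring ZFS metadata overhead):

```python
def usable_tb(vdevs):
    """Pool capacity is the sum of its vdevs; each raidz-N vdev
    contributes (drives - N) * drive_size of usable space."""
    return sum((drives - parity) * size_tb
               for drives, size_tb, parity in vdevs)

# one raidz2 vdev of 5x 16TB drives
print(usable_tb([(5, 16, 2)]))       # 48
# grow the pool later by adding a second identical vdev
print(usable_tb([(5, 16, 2)] * 2))   # 96
```

This is also why "max out the pool at creation" matters: each added vdev pays its own parity cost, so two 5-wide raidz2 vdevs give 96TB usable, while one 10-wide raidz2 vdev of the same drives would give 128TB.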
3:29 Hope to see more SilverStone brand placement, they were a bit shy on advertisements but they make good products!
WOW, that looked really simple to set up... GOOD JOB!!! And the fact that you get about 250-300MB/s (you said GB/s btw lol), that's faster than what I get copying files from my HDD to my Gen4 x4 NVMe SSD lol. AWESOME WORK!
Your hard drive is only one drive.
Roman has, depending on parity strategy, 2-4x your drive count, and striping incoming writes across blocks scattered over all the drives is the most natural of fits, so the sequential speed scales roughly linearly.
Roman! Scale is so much more than a NAS. It is really a basic hypervisor with built-in apps and Docker capabilities run by Kubernetes. Increase your RAM to those 64 gigs and run a Jellyfin server, a Pi-hole, or even a VPN server/client. I've been learning it for the past year or so and I'm just a normal gamer. It has been really awesome learning it and the Debian Linux behind it. Plus there is an amazingly supportive community behind it, as well as iXsystems having a basic open-source policy for development of the apps and containers on it.
I have to say that I've been fiddling with TrueNAS for the past 2 weeks, and outside of super simple NAS use, TrueNAS is THE WORST. Everything is incomplete and unfinished. The docs are utterly useless as a guide because they try to explain things but never provide examples or recommendations. The community sucks a lot; most threads never get resolved, or they just link another thread for TrueNAS Core.
YouTube tutorials never go in depth to teach you ways to do things (you'll never find a tutorial over 30 mins).
The experience has been a nightmare.
Tried Proxmox yet? Also Debian, but a full virtualization platform, with container support and some standard deploys as well.
@mycosys As I am only running a small home lab with a few containers, it seemed unnecessary when Scale could handle it. I did look at Proxmox though. Amazing system.
I have this case for my NAS. I found that under a lot of load the fan setup isn't great and the drives got quite warm. My solution was to take two old fan frames and use them as ducts on both intakes; with some foam on the back of the fans they seal up against the drive bays much better.
If I were doing it now, I'm sure a 3D printer and some planning could produce a nicer solution, but mine has been running for 10 years or so without any problems.
Thanks for putting your videos out there for us. Some say it's not "perfect"; I don't care. Still enjoyable to watch. I like the explanations you give.
Came for the build, stayed for the cat ❤
For those simple requirements (e.g. no VMs, Docker, iSCSI, etc.), I would take an out-of-the-box solution any day of the week.
It is cheaper, easier to support down the line, way more power efficient, just as reliable and more compact...
4:45 That's a big chonky cat! (beautiful)
Welcome to the NAS world, Roman! The 10G Ethernet ports are the best selling point of this board, though some would be happy with just 2.5G Ethernet. In my case, I have an x4 card housing 4 2.5G ports. I am running Xpenology DSM 7.1 on a 12-core/24-thread Xeon with 64GB RAM under Proxmox. If you use DSM 7, you can automatically back up the main NAS to a second NAS without hassle. That is the benefit, plus all the other apps you can house on it, including Surveillance Station!
I rebuilt my little 2-system homelab with a completely overkill Epyc system for the PCIe slots and BMC as well (full ATX board in a Fractal Define R5 though). So nice having a relatively modern board with recent IPMI that I don't need an old version of Java to access. As others have mentioned, the SATA card might be a problem, but IIRC OcuLink-to-SATA cables are pretty inexpensive if you do run into problems, or of course you could just grab an LSI HBA flashed to IT mode off eBay. Even with cables they can be had pretty cheap.
That transfer speed was actually pretty slow. If you were going 10G to 10G, even without tweaking anything you should be seeing higher speeds, unless the system you were transferring from was reading from a couple of spinning disks.
The IPMI port is usually bridged to the 10G ports, so you can still access the remote management portal.
You'll want to replace that SATA card with a proper HBA in IT mode for long-term reliability, as others have stated. You also might want to address the cooling issues this case has, as others have suggested, so you're not cooking your hard drives and causing premature failure. Also, with the wonky memory management in TrueNAS Scale you'll only have 16GB of the 32GB available for ARC, which may end up causing some performance issues if you are editing directly off the NAS, so you may want to up that to 64GB. Memory is cheap, so best to do it now while it's cost effective. There's a lot of good information and helpful, knowledgeable people on the TrueNAS forums to point you in the right direction if you need help with anything. Nice job on the video, thanks for sharing your build.
How did you get your hands on that board? The only one i could find was over $500
You should always have your operating system on a ZFS mirror. The best way to do this is with two good USB 3 drives. FreeNAS does very little with the boot device; it doesn't need an SSD, let alone a fast one, since all the work is done on the storage drives. If you don't feel comfortable using USB drives, at least put in a second cheap SSD. A common way commercial solutions do it, when they don't want the OS exposed, is to use internal USB ports or SATA DOM sticks (which are basically USB drives over SATA).
One thing I've liked to do recently, instead of all-SSD servers, is to get HDDs plus Optane (which works with AMD when used by NAS OSes).
I have a 160TB raw / 112TB usable HDD NAS. I have 4 of the 118GB Optane drives that I use as read and write cache, but also 2 of the 960GB 905P drives for special metadata (I plan to reverse this in the future: 4x 960GB for read/write with 3x 118GB for special metadata).
The special metadata drives REALLY speed up the sense of snappiness.
Hi, may I know which NAS OS works with Optane? Also, do you have a guide on using Optane for a NAS and the special metadata?
@@zazelskycrest2525 I am using TrueNAS Scale; however, most OSes can use Optane, even Windows, as long as you use third-party software like PrimoCache.
The Level1Techs forums are a good place to go for Optane info, and I think they may even have a few videos on the subject.
A friend and I have recently been working on VERY similar NAS projects, using Asrock Rack's X570D4I-2T, an AM4 equivalent to what you bought, paired with the Ryzen 5 Pro 5650GE 6-core 35W part. It also has two OCuLink 4i ports, so can run 8 drives without add-in cards
Do you have a link to the mobo and case? Also, no kitty pats?
DerCat as tech support
7:06 cat tax ❤
Get a UPS!!! Also a tip: mark the install date, for tracking battery health/lifespan. And stagger offline backups too, in case you get locked out due to ransomware.
You are assuming he doesn't have a UPS.
My NAS setup is a Raspberry Pi 400 running OMV with two 500GB SSDs, one being a backup of the other. It has been running with no hiccups for like 500 days. It consumes a minuscule amount of power; I measured it but can't recall exactly.
For the PSU I would recommend the redundant ATX PSU FSP Twins.
Talk about an overkill PSU :D
Silverstone makes good, but in some ways very outdated, cases. For such a build I would recommend the Jonsbo N2 (white!), which is more elegant, well thought out and made from aluminium. If you need all 8 HDDs, then the Jonsbo N3.
I didn't even know a 4-core Epyc CPU existed, as they're usually monsters with the core counts. Seems very odd. Are they basically defective higher-core-count CPUs that have just had the defective cores disabled and been sold to the embedded market?
The embedded market actually has extremely high requirements for temperature tolerance, shock, etc. so I doubt they use defective/binned down dies.
All the other Zen 1 products have been discontinued; embedded is the only place they can go while AMD guarantees availability until 2028. They're not defective: who would buy a defective CPU and knowingly install it in an embedded system?
@@shanent5793 When I say defective, I don't mean not functioning! I mean they could have been higher-tier CPUs in the line that just didn't hit the performance standards set, so they were binned and possibly had cores disabled from the factory... rather than get thrown out!
Is the fan in the rear intake or exhaust?
If it is intake as well, you could duct from there onto the CPU heatsink, right?
Exhaust. Intake on the side towards the HDDs, and the rear exhausts the warm air.
TrueNAS Scale - excellent choice
Cat. I came for tech, I stayed for cat.
I have that case. It’s an actual nightmare to deal with. I hope you are able to keep the general heat under control and you never have to do maintenance on the thing.
Very cool case!
13:36 - "and with a speed of 250 - 300 gigabyte"😵💫
An SSD cache drive would be great on this.
Nice, I had the same issue, needed a backup of my backup, i.e. if you don't have a backup of your backup, you don't have a backup. Used the same case, an i5 Intel CPU, a SATA expansion card for 5 more ports, installed/migrated the TrueNAS NVMe from old hardware and voila. Extra storage for the backup NAS...
Very cute kitten. Thanks!
Really curious why you didn't use the OcuLink port for more SATA ports? It would have been much more efficient to set up that way and the cabling would have been much cleaner.
He didn't need more drives, that's one. Two, you still need an OcuLink-to-SATA adapter. Unless you were thinking about adding an external HDD cage connected via OcuLink?
I was also looking into building my own NAS, but my old motherboard and CPU are dead, so I'll have to look at it some time later.
G'day Shiek, Makita & Roman,
My storage is in a Cooler Master N300 case; it supports 1x 2.5" SSD, 8x 3.5" HDD & 2x ODD. The OS is on a 128GB SSD, and I'm using 3x 6TB & 5x 4TB HDDs as a backup for archiving old home videos I have edited for family & friends.
I am just using W10 as the mobo had an activated key, while the hardware is an i3-3220 + ASRock Z77 Professional Fatal1ty (10 SATA ports) + 8GB DDR3-1333, which were just bits & pieces left over from upgrades over the years, so I thought I'd give them something to do rather than buying new parts or a NAS, as they still work fine.
Interesting video. 👍 Not sure if you mentioned this in the video, but I'd like to know how much power the system consumes at idle and under working load. Thanks!
Taking a long time to boot an ISO from the KVM console is an ASRock Rack thing with that Aspeed BMC.
Could you include a parts list in your video description?
Interesting; I have a very similar setup, same case, only I went with the EPYC3251D4I-2T.
I was wondering how to attach a fan to it; I've been running it fanless for a while.
Did you just run those zip ties under the heatsink and tighten them? Do you need to remove the heatsink to mount it that way?
I was also thinking about trying 3D-printed mounts I found on some forum, and attaching one of the low-profile Noctua NH-L12S coolers there, but I worry that it won't clear the drive cage.
Nicely done Roman ... Linus ain't got nut'n on you!! One suggestion though ... More Cat Time, plz.
My only recommendations would be buying a proper HBA that is flashed to IT mode, and buying more, smaller-capacity drives for the same total capacity. Additionally, make sure you have a few spare hard drives on hand for when your drives die.
You buy smaller-capacity drives so that if a drive dies in your pool, the time it takes to rebuild the data is shorter, since each drive's capacity is smaller. (Writing data at 250MB/s to five 4TB drives will be faster than to two 10TB drives in the event of a drive failure.)
Wendell from Level1Techs has taught me a ton and I want to make sure I pass on what I've learned to others so they don't make the same mistakes as me. (I bought four 12TB drives for fun and then learned how long it took to build my RAID 5 array: 1000 minutes. If I had gone with more smaller drives it would have been done substantially faster.)
I'd love to know how long it would take to rebuild one of those drives after a failure. I know people prefer more smaller-capacity drives because it means less time spent in the window where another failure could mean data loss.
4-6TB is the sweet spot.
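To put rough numbers on the rebuild-time reasoning above (hypothetical 250MB/s sustained speed; real resilvers are usually slower and only copy allocated data):

```python
def rebuild_hours(drive_tb, resilver_mb_s):
    """Best-case rebuild time: the replacement drive must be
    written end to end at the given sustained speed."""
    seconds = drive_tb * 1e12 / (resilver_mb_s * 1e6)
    return seconds / 3600

print(round(rebuild_hours(4, 250), 1))   # 4.4  -> a 4 TB drive
print(round(rebuild_hours(12, 250), 1))  # 13.3 -> a 12 TB drive
```

Even this lower bound shows a 12TB drive keeping the array degraded three times as long as a 4TB one, which is the window where a second failure hurts.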
What is the model number of the ASRock motherboard? I can't find it on their website. Thanks!
Hey man, really nice build. Do you know what the power consumption is under load and at idle?
Why did you choose TrueNAS Scale if you only want to use it as storage, instead of using TrueNAS Core?
I think he's not the right person to ask. He kind of implied he doesn't work with NASes a lot and isn't that deep into either the software or the (home-level) hardware; that's why somebody else would help him set up the thing later. I guess it must've been a mix of reasons around this, and Scale being Linux-based might play a part as well.
The motherboard was overkill, but it looks used, so I am not sure about the real cost :)
General wisdom with RAID for a while has been to keep at least 2 redundant drives,
simply because the chances of a failure while rebuilding are pretty high.
That being said, an effective backup strategy is more important. If the NAS is a total loss, all relevant data should ideally still exist at two different physical locations.
For anyone doing RAID 5, some advice from bad experience: when hard drives get towards end of life, they tend to give SMART errors at around the same time. When you get the first SMART error, replace the faulty drive immediately with a new one. Reconstruction of the data takes a looooong time, more than a day (possibly a few). The anxiety of waiting for your data to reconstruct makes it feel even longer. Then you should assume that all drives are on the way out and change the rest of them one by one. That work can take more than a week (please use a UPS), and you should pray no catastrophic failures happen in that time. The problem with modern HDDs is that the read/write speeds are too low for their capacity. A useful demo might be a dry run of how long a reconstruction scenario would take, before any real data is placed on the system, for when/if push comes to shove.
RAID 10 reconstructs faster, because there are fewer XOR operations. You could also withstand more drive losses, at the cost of losing just 14 TB.
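The XOR point can be illustrated with a toy parity reconstruction: RAID 5 must XOR every surviving stripe member to regenerate a lost block, while RAID 10 just copies from the mirror partner. A minimal sketch (not real RAID code, just the byte math):

```python
def xor_parity(blocks):
    """XOR a list of equal-length byte blocks together, as RAID 5
    does per stripe to compute parity or regenerate lost data."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d1, d2, d3 = b"\x0f\xf0", b"\x33\xcc", b"\x55\xaa"
p = xor_parity([d1, d2, d3])        # parity written alongside the data
rebuilt = xor_parity([d1, d3, p])   # lose d2, rebuild from the rest
print(rebuilt == d2)                # True
```

Every rebuilt block requires reading all surviving drives, which is why a RAID 5 rebuild hammers the whole array while a mirror rebuild reads only one disk.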
The real shocker for me in this video: I knew Germans hybridized wolves with dogs, but hybridizing a cat with a tiger? Blasphemy, man!
Just something to be aware of: NAND loses its data over extended powered-off periods. Even though you'll only be firing it up once every few months, it's probably good to boot it once a month for a scrub, and to keep the boot drive's NAND cells fresh too.
What are you talking about? His HDD array? Or his boot drive, lol?
How did the fins on the embedded CPU get bent?
Parts list please; that case is really useful.
It's great you went for IPMI; you could script remote activation of the NAS for backup, then shutdown. Interesting that you went for the Toshiba drives based on price. A few months ago I updated mine with 18TBs and the price point has already moved on!
That's pretty much what I did with my HP Microserver. It wakes up at 10PM every day, backs up my main server then goes back to sleep. It's been a couple of years since I built it but if I remember correctly the sleep script waits for at least 5 minutes with no disk activity and CPU load below a certain percentage before shutting down. I use IPMI to wake it at other times because the machine itself is buried in a barn some distance from the building my server is in.
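The idle-wait part of such a script could look roughly like this (a sketch with hypothetical thresholds; the commenter's actual script isn't shown, and the wake would come separately via IPMI, e.g. `ipmitool -H <bmc-ip> -U admin chassis power on`):

```python
import time

def wait_until_idle(sample, threshold, quiet_s, poll_s=60,
                    clock=time.monotonic, sleep=time.sleep):
    """Block until sample() (e.g. disk I/O rate or CPU load) has
    stayed below `threshold` for a full `quiet_s` stretch, polling
    every `poll_s` seconds. clock/sleep are injectable for testing."""
    quiet_since = None
    while True:
        if sample() < threshold:
            if quiet_since is None:
                quiet_since = clock()
            elif clock() - quiet_since >= quiet_s:
                return
        else:
            quiet_since = None   # any activity resets the quiet window
        sleep(poll_s)

# After this returns, the script would shut the box down, e.g. by
# invoking the OS poweroff command; any activity spike restarts the
# required 5-minute (quiet_s=300) quiet period.
```

The injectable `clock`/`sleep` hooks are just there so the timing logic can be exercised without real minutes passing.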
Cat is HUGE!
No digital video outs? I've still got a CRT but some LCDs don't have VGA inputs anymore..
I'd recommend deprecating old videos: put really old videos on your HDD NAS and keep newer videos on the SSD NAS, so you can free up space on the SSD NAS.
Can you list the component parts in the description? It makes it easier to find them and benefit from your experience. Thanks.
Yeah, whatever happened to the 12VO thing that was heavily pushed a couple of years back for the consumer market?
cats are awesome! 🙂
Thank you for showing everyone something new. You have chosen the hardware wisely, except for the M.2 NVMe! You could have chosen a Samsung one, since you will not have parity for the boot device. But the build is aesthetically pleasing and tidy. Thank you.
Thanks :)
motherboard model?
Heh, just filled up my NAS too. Ordered a pair of used enterprise 4TB drives to hold me over until I build a new box. Currently running Unraid but want to move to Proxmox and TrueNAS. My old AMD FX-8150 will finally be retired for good once I build that box.
How do you set up all the drives to run in RAID?
Great case; been using one for a long time now. The only downside is that if you have an M.2 on the back of the motherboard there is no airflow (I made a vent for mine and that solved the problem).
What is the energy consumption of such a NAS?
probably less than 100W
Cool setup and vid. I would have liked to see you install TrueNAS to a USB drive and then use the PCIe drive as a cache or metadata device though.
What about power efficiency? Power consumption?
Petting a cat during a build can transfer static buildup....
You do know that ESD is not the horrible monster everyone says it is, and that a properly worn wrist strap won't make things harder? Add a properly grounded mat and your concerns won't matter.
What is the model of that board please?
More CAT! Hell yeah!
You could add an SSD as a cache vdev on your pool and boost those transfer speeds, but as it is, it's pretty good.
Are you going to move this to one of your factories after syncing all the data? If you could set it up to sync nightly you'd have a great backup. Worst case, if you could bring it back once every week or two, sync the changes and take it back, you'd still be in a good position if anything happened to your primary data.
Just what I'm looking for😊
And the power usage?
Could you maybe list the parts used, please?