System specs
Dual power supplies
Supermicro X11SPL-F
Intel Xeon Silver 4216 CPU @ 2.10GHz
256GB ECC DDR4 Memory
Four SAS9305-16i HBA Cards
Sixty Seagate Exos 18TB Enterprise Drives
Intel Ethernet Network Adapter XXV710-DA2 (25GbE)
How to Layout 60 Hard Drives in a ZFS Pool & Benchmarking Performance.
ruclips.net/video/h4ocFY-BJAQ/видео.html
Storinator Q30 Review
ruclips.net/video/3OpwUnuL6Zk/видео.html
TrueNAS Tutorials
lawrence.technology/truenas-tutorials/
Such a cool machine. Thanks for posting the specs.
No hardware RAID controllers, just SAS 9305i HBAs! Where's the data protection (cache protection via battery/capacitor pack) on power failure, should the UPS fail or the server crash?
That poor RJ45 cable is giving me the OCD shakes. :-)
I paused to check the comments before asking the same. 🤣🤣🤣
...and where's the boot for the cable!? 😡🤣
I used it just to see who is paying attention. 😎
@@LAWRENCESYSTEMS Oh the humanity
Please think of the children 🤣
Oh, and Mike, don't forget it's not OCD... it's CDO... because that's the order everything should be in! 👇🧐
🤣🤣🤣
Now this is my kind of video.
Whoever terminated that Ethernet cable needs to go back to school :-)
1:50 That ethernet cable ;.;
cost saving measures ...
I was hoping someone would notice.
@@LAWRENCESYSTEMS trust us: we do.
@@LAWRENCESYSTEMS I even noticed over here... And I'm from Belgium!
@@LAWRENCESYSTEMS I helped teach networking to high schoolers. That @#$% doesn't pass!
01:46 That termination! *twitch twitch* hehehehehe
Just two more of those and you should be able to run Flight Simulator 2020 locally
This is beautiful
Now back it all up too! :)
XD
Nice video! I would love a collab with Linus Tech Tips on storage!
Went to the site, followed the quick guides to the build page, saw the starting price of over $10K USD, and closed the page very quickly, deciding to stick with a DAS and JBOD chassis.
Still, that would be less than the cost of storing this on AWS S3 for a month.
@@toddfisher8248 Yeah, you don't store the amounts I or others have on cloud storage. But these systems are still very expensive.
Drives are not cheap, and are by far the lion’s share of the cost….
Thanks for this quick overview, but I do have a question. This case doesn't seem to come with cable management rails. So when a drive fails, how can you easily remove the chassis from the rack without worrying about a cable getting stuck and coming unplugged, or worse? If everything is so mission critical, wouldn't a storage solution with hot-swap bays in the front make more sense? I am pretty sure you discussed this with the client, so I do not mean to criticize; I am just wondering about the decision-making process.
45Drives can provide a cable management arm, but it doesn't fit well on the XL60 due to the length of the chassis. This is generally not an issue, but it is definitely a consideration to keep in mind.
It's really not that big of a deal; for this use case the rack is not completely filled, so there is enough space between servers.
That is good for lots of Linux ISOs
How do YOU guys burn-in HDDs, or arrays of 'em - each drive individually, the whole array, the full capacity?
Currently my home lab procedure is: short SMART, long SMART, badblocks -sw -t random ([data-]destructive write test), and another long SMART test.
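For what it's worth, here is roughly what that per-drive sequence looks like as commands; /dev/sdX is a placeholder, the badblocks pass destroys all data on the drive, and the -b 8192 block size is an assumption so the block count stays in range on very large drives:
smartctl -t short /dev/sdX                  # quick self-test; runs in the background, check results with smartctl -a
smartctl -t long /dev/sdX                   # full surface self-test (many hours on an 18TB drive)
badblocks -b 8192 -sw -t random /dev/sdX    # DESTRUCTIVE write-and-verify pass over the whole disk
smartctl -t long /dev/sdX                   # final self-test, then review reallocated/pending sector counts with smartctl -a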
I just spent 100k on 45Drives servers 😅 got six S45s for a Ceph project with over a petabyte of actual usable space 👍
Now fill it with 100 TB Nimbus Exadrives!
With the cost of the appliance, is the customer purchasing it and shipping it to you, or do you purchase and maybe take a deposit, or something else?
Do you know how the support in Europe is? I am still looking for similar style storage servers to run CEPH on for lower speed applications (like backups of the high speed clusters), but I really don't want anything that I can't get parts for when I need to get it fixed
I really don't know as we have only sold this to US and Canada clients
What's the weight with all 60 HDDs - and does it need special heavy-duty rails to hold it?
About 90 lbs of hard drives and 70 lbs of case.
Do you have a recommendation in terms of the number of CPU cores per TB of storage on TrueNAS?
Or a certain clock speed per core per TB of storage on TrueNAS?
Thanks.
I’m sure the Xeon Silver included, whatever 12- or 24-core model it is, is more than up to the task. More cores and more clock speed rarely help with pure file serving on a 25GbE network.
@@mdd1963
That REALLY depends on the implementation of the 25 GbE.
If you DON'T have RDMA and/or RoCE available and enabled, the CPU can very quickly become the bottleneck (which is why RDMA and RoCE were developed in the first place).
Without that, it doesn't really matter HOW powerful the CPU is, nor how many cores it has, because the transmission of network traffic cannot be parallelised in a way that takes advantage of the parallel processing potential of a multi-core CPU.
I've been out of the data center storage world for a while. How do the XL60s compare with the major storage players like NetApp, EMC, IBM, HPE, etc.? Or is this a case where the client is avoiding the major players?
Also, I'm surprised that you don't build and test these on-site. It would certainly get installed faster and reduce the liability of shipping things twice.
Pricing is not that far off, but you get the huge advantage of not buying proprietary hardware (mainly the motherboard and chassis, of course).
Since this specific server is apparently going to a technically capable person/team, it makes a lot of sense: it would be a lot easier and cheaper to substitute faulty hardware than to pay for support from the "big players", which is not really economical since it's probably just the one server.
What burn-in testing programs do you use?
Phoronix Test Suite and large data dumps.
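For anyone wanting to try something similar, a minimal example of a storage run with the Phoronix Test Suite; pts/fio is just one example disk profile, not necessarily the exact one used here:
phoronix-test-suite benchmark pts/fio    # prompts for the disk test options, then runs and records the results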
@@LAWRENCESYSTEMS Thank you. I have an eBay parted-together Supermicro 24-bay to test.
Hi Tom, I was wondering what happens in the event of a power outage. Do the drives somehow get corrupted when you use HBA cards? I know that RAID cards include batteries on the card, but how is that handled in this project?
As I said in the video, the HBA cards ARE NOT providing RAID functions, so ZFS handles the RAID integrity.
The ZFS file system takes care of it. I forget the term for it, but ZFS doesn't consider a write successful until it has been committed. So in the event of a power failure, all the writes that weren't committed are treated as if they were never written.
@@joebleed ZFS is a "Copy on Write" or COW file system.
Data centers will have redundant battery backups and on-site generators and stuff like that to mitigate issues caused by power outages.
@@LAWRENCESYSTEMS That's what I was trying to remember.
Can anyone recommend a 1-2U rack-mount case for a smaller-scale NAS?
How do you back up something like this? I struggle to find a way to back up my 16TB NAS.
They are getting more of them and using ZFS replication.
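For anyone wondering what that looks like under the hood, a minimal sketch of ZFS replication between two machines; pool, dataset, snapshot, and host names are placeholders, and TrueNAS normally drives this through its replication tasks rather than by hand:
zfs snapshot tank/data@backup-1                                                       # point-in-time snapshot on the source
zfs send tank/data@backup-1 | ssh backup-host zfs receive -u backuppool/data          # first full copy to the remote pool
zfs snapshot tank/data@backup-2
zfs send -i tank/data@backup-1 tank/data@backup-2 | ssh backup-host zfs receive -u backuppool/data   # later runs only send the changes since the previous snapshot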
What about Supermicro's complete offering, as this is already using their board?
I have not used it
The 60-drive chassis from Supermicro? Pretty decent. Pricey though, and you can only get it with drives already included (which also come at a premium :)). If you're on a budget I'd go for the Storinator. The Supermicro one has the advantage of 2.5" slots for SSDs though. With that many drives I'd always add three mirrored SSDs as metadata storage.
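To illustrate that last suggestion, a rough sketch of adding a three-way mirrored special vdev for metadata; the pool name, dataset name, and device paths are placeholders:
zpool add tank special mirror /dev/sda /dev/sdb /dev/sdc    # metadata allocations now land on the SSD mirror
zfs set special_small_blocks=64K tank/dataset               # optionally steer small records to the SSDs as well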
Can this device be used as a cloud hosting storage device?
If yes, can you make a video on how it can be done?
Cloud hosting is a generic term, but you would load whatever you want to host onto it.
Where are you buying your drives from? Can't get drives here in the UK, no stock. Bought some second-hand enterprise drives but they're giving me nothing but problems. ZFS sucks! Can only get SMR drives here in the UK unless you use NAS drives, and SMR is useless for block storage.
Why not use Ceph? Is TrueNAS that much better?
Why implement a more complex solution than needed?
How much did it weigh? A lot
How much did it cost you? Everything
About 170 lbs, and I don't put pricing in the video because prices change and they are listed on the 45Drives site.
@@LAWRENCESYSTEMS That was more of a joke about how heavy it is and how expensive it will be to drop it xD
I presume these drives might best be loaded while the chassis is already mounted in the rack. Wrestling 77 kg onto rails, even with 4 people, is simply not reasonable unless you have a cart designed to lift these for rack mounting.
Yummy, that'd make a sexxy chia farm
It's for a chia farmer, isn't it?
So no special metadata vdevs or small block caching on NVMe?
Not for this project
And ya did this all without linus. I'm proud 🙂
Linus would have shown me how to hot glue it...
@@LAWRENCESYSTEMS "I understood that reference"
That's a lot of Linux ISO's ...
What are ya gonna do, download the whole internet to it? TrueNAS, wow, you really trust it. Great, beats Win Server.
While using this you can cook pancakes from the heat.
What a terrible heresy, not properly crimping networking cables! LOL
I used that cable to see if people were paying attention.
PSA: Seagate is a mistake. That is all.
There's a huge difference between desktop-rated, NAS, and enterprise drives. IronWolf is solid and Exos is enterprise.
@@kingneutron1 I know all about Seagate's product lines. I've worked with thousands of their drives over the years. I wouldn't recommend them to anyone.
PSA: Seagate Exos enterprise drives are fine. That is all. I've worked with hundreds of their drives, possibly thousands.
@@StephenDeTomasi I've worked with thousands of Seagate drives a month over the course of 15 years. Never trust them for anything important.
As nice as this system is, it seems insane to spend that much when you can get used hardware for a fifth of that price that is just as performant, if not more so.
The majority of the cost of this system is the drives, and while buying refurbs for home use is nice: A. They are not readily available in the 18TB flavor. B. They don't have as extensive a warranty. C. Scraping the internet to save a few thousand dollars is not really worth the time in all environments.
Plus, in the grand scheme of things, this is one of the more inexpensive solutions out there since there is no recurring cost to use the system.
@@tw3145wallenstein I mean, you can get similar hardware at a large discount, not counting the drives. A 9k system here would cost you about 3k on the used market. But yes, of course, no warranty or support.