I'm behind on my YouTube consumption, so I'm finally getting caught up on your stuff. Thanks for the shoutout! Not sure if I'm the best place to go for complex ZFS questions though 😅
Just having done some benchmarks with actual hardware puts you in the top 5% of all ZFS users though hahaha
@@JeffGeerling Fair enough 🤷♂
@@JeffGeerling What is that ARM CPU and where can I get it?
@@Euathlus1985 This one is running the Q32 Ampere Altra; you can only find it on eBay right now. The Q64 can be bought from Newegg in a motherboard bundle, and other models can be found too, like from ADLINK.
Chapter two is named with great precision
I was hoping this'd be the first comment, and it was when I checked. Have you looked at chapter three?
Great video and testing! Also, thanks for the shoutout! Would love to send my ZFS snapshots somewhere, let's trade disk space!
Heh... there's a secret project there-hopefully we can get it going later this year.
@@JeffGeerling subscribed!
@@TechnoTim can't wait for that colab :)
Working on my 1U 2-Node server today. **RRRRRRRREEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE**
lol I can hear that... and the ringing in my ears after forgetting hearing protection!
@@JeffGeerling I've got a Dell PowerEdge R310, the 1U one. Interestingly, when only one of its two PSUs is plugged in, the HDD in my UDM-Pro is louder than the fans in that R310 :D
You mentioned all of the channels I am subscribed to!
The end of the video was BRILLIANT! 😀
Hehe I understood that reference.
brother I sleep next to a 1U Sun Fire server what even is noise
Bro, I sleep next to my HPE DL360p Gen8 1U server, and that thing is LOUD. I had to go all in on flashing an SSH-enabled custom iLO 4 firmware so I can artificially limit how high the fans can go.
Impressed with your diligent level of detail on the testing.
Thank You for the extremely detailed review and dedication you put into your videos
Great video Jeff. I get my HL15 next week and this really helps in planning my upgrades.
I'm going to upgrade mine to 25 Gbps Ethernet this week too! Also added a couple U.2 NVMe drives, need to post some updates sometime...
Fan noise? Bad sound.
Jeff Geerling? Good sound.
Jeff saying nactua instead of noctua is another sound altogether
Maybe I'm alone in this, but I like a little bit of fan noise in a "haha, the computer is thinking" kinda way, same with hard drive clicks and whirring.
@@dafoex it's definitely a nostalgic sound!
I think your problem with the SSDs is less the SATA interface's fault and more the fact that they are QVO drives, which favor capacity over speed. Retest with SATA SSDs that use TLC NAND instead of QLC.
Wonder if it's worth using any consumer SSDs for cache on these, or if they'll just wear out... I'd just max out the RAM and stay fully on HDDs, or go with Micron Pro drives if money allows. Or maybe have two different pools, one for speed and one for colder storage, dunno.
indeed, QLC is trash
I would never use QLC for constant writes. It's fine for "archive" or cold data.
@@Darkk6969 If you have a lot of time that may be true, but even decompressing a 40GB game takes 10x longer on QLC vs TLC. That's a lot less data throughput for only a 5-10% reduction in cost at best.
Might be correct, but damn, 8TB SSDs are just crazy expensive, the QVOs already being kinda stupidly expensive for most NAS applications... But the step up to the next 8TB SSD, not even mentioning TLC, is insane once again...
Honestly, for performance alone it's not worth switching my 8TB HDDs for SSDs; only power draw and noise would be a factor, but even here in Germany with our crazy high power costs, they'd need to run over 10 years 24/7 before the power savings would outweigh the price difference 😅
The QVO SSDs are dead slow when writing more than ~20GB at once. They use SLC caching, but after the buffer is full they fall back to QLC write speeds; in my case these were down to 100-150MB/s, which is why the log device didn't help your HDDs in any way.
Blades passing close to the openings have the same effect used in old air-raid sirens: those used a rotating drum with openings that crossed matching openings on the outer shell, and every time they lined up it produced something similar to a small explosion.
Okay. People wanting a dead silent, or as close to dead silent as possible, I can respect that. But dead silence would drive me nuts. Having a small amount of fan noise keeps me sane.
A worthy option :)
I like the silence, but under 30 dB would get to me
My office building has highly efficient air conditioning which doesn’t use fans. It’s so quiet they had to install a sound system to add white noise.
As someone who likes to listen to music on his rig (a lot), I quite like my silence. I can appreciate that tinnitus sufferers might see this differently though.
At 3:19, the Only Fans Joke, I hope it was meant as a joke because it was a good one.
he has a bin in his shop labeled "Only Fans" with a bunch of XXX and fire emojis, lol.
A clap and a half to you Jeff for your reference and for the quality in depth content. Keep up the good work!
8:33 Just noticed: it seems like the CPU cooler is sagging a little. When the motherboard heats up, it will bend ever so slightly (the problem is much worse with hanging GPUs).
This is of course no problem if the case is installed in a horizontal position, but if it were vertical for daily operation, I would tie a thin string to the top of the case to take some of the load off the cooler's mounting point. Maybe not necessary, but just a precaution. Better stable than experiencing weird problems down the line.
The end result is absolutely beautiful.
You should use heat shrink on the unused 12V fan connectors; the tape just gets loose after some time.
Thumbnail, just fantastic.
he changed it :(
@@arch1107 Such a scheme :)
Electricity prices @7:35 are what market makers and big companies have to pay. Citizens often have to pay five times more :(
Great video, love your homelab/Linux/Ansible content. Keep it up, and thanks for the Ansible playbooks you've written. They've helped more than you know :)
Been running a NAS using ZFS on Ubuntu, on a Raspi 4 since shortly after the Raspi 4 came out. It‘s been Just Working all that time.
Those electricity prices in Sweden shown on the map are not really what we pay, just to clarify. It's what my electricity provider pays, and then we add their margin, then a tax on every kWh, then grid cost, then fuse cost, then VAT on top of the tax, and voila... I pay €0.27/kWh instead, so efficiency is really important.
Nobody: ...
Jeff: "01:04 - Only fans"
Wants quieter fans, gets talked into fans with all the bells and whistles.
Probably could get about 90% of the same performance from one of Noctua's less expensive 120mm fans too, but since they offered these models, I couldn't refuse!
Also, was that a MatPat reference? 😂
That's just a theory...
@@JeffGeerling Hahaha MatPat lives on in our hearts I guess...
Hey Jeff! Thanks for the video! I see you in the comments so I grabbed the chance to say thanks for all the videos you made. Really helpful
You're quite welcome, Mr. Trash :)
@@JeffGeerling mAh hart :)
7:34 where did they pull those numbers from? or did they accidentally put additional 0 in the price?
Really? Nobody else heard the only fan desk?
Jeff you just earned your first subscriber on OF, see you there 🎉🎉
11:16 Did you, like, just dump the IDs from SSH? I've always found Ansible a bit clunky when managing remote storage because of having to dump a set of IDs for each server. I usually opt to set up storage manually lol.
Yep, just grabbed the whole directory listing, wish there were an easier way :)
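For anyone wanting to try the same thing, here's a rough sketch of what that directory dump (and a pool built from it) might look like; the pool name and device IDs are made up for illustration, and this isn't necessarily exactly what Jeff's playbook runs:

# Stable identifiers (unlike /dev/sdX, these don't shuffle between boots):
ls -l /dev/disk/by-id/ | grep -v part

# Hypothetical example: build a mirrored pool from two of those IDs.
sudo zpool create tank mirror \
  /dev/disk/by-id/ata-EXAMPLE_DISK_A \
  /dev/disk/by-id/ata-EXAMPLE_DISK_B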
The spacers work because now you don't have the fan blades passing so closely by the grille, which caused turbulence before. That's the main reason I usually cut out the fan grilles on my machines, as it completely removes that noise.
Try reading the ZFS books by Michael W. Lucas and Allan Jude. They are a bit older now and lean toward ZFS on the BSDs, but the info is still sound and I learned a hell of a lot from them.
Thank you for sharing this with us. Noctua may be expensive, but they still reign supreme.
Just a tip for anyone running arm, you can use box86 or box64 to run x86 software on arm. This won't work for everything but it is very useful.
Wolfgang will really appreciate this video
His content's the gold standard for tracking efficiency (IMO).
@@JeffGeerling true, you two were my inspiration to start tinkering with PCs and server hardware 🙏
@@JeffGeerling Collab whennn...?
Looks like I need to order some Noctua spacer rings.
My "server" is a Ryzen 5 3600 with 32GB DDR4 running at 2666, and the shared drive is just a USB 3 external hard drive. It's actually just a repurposed MyBook, so I want to keep it external in case I ever need it on the go (you know, like the intended use case :p). I've undervolted the CPU to run around 45 watts at idle, and I let the hard drive spin down when not in use, which is most of the time. Benchmarking this was difficult, but I wanted to know the idle power draw and the power draw serving files at 1Gbps, and that was my goal for efficiency. So, I really like what you say about efficiency meaning something different for everyone. Maybe one day I'll go ARM, who knows?
If you want to shave off a few more watts at idle, look into AMD's monolithic jobs - 4500, 4600G, 4650G, 5500, 5600G. (If memory serves, there's not much in it between a 3600 and 4500/4600G performance-wise, and neither of them are expensive.) Make sure the BIOS option "Lower Power S0 Idle Capability" is turned on, as well as all C-states, PCIe ASPM etc. You can check for ASPM among PCIe devices with:
lspci -vv | awk '/ASPM/{print $0}' RS= | grep --color -P '(^[a-z0-9:.]+|ASPM )'
I've been running an old i5 6500 with 8 gigs of RAM as a Minecraft server and was surprised to see the thing only use like 20-25W at idle. Ryzen, especially the chiplet-based CPUs, is just not great when it comes to idle efficiency, I think. Whole different story at load though.
@@Keklient 20-25 W actually isn't even a particular achievement for a Skylake system... Sandy / Ivy Bridge will do that already. There's a number of Skylake era office machines that'll idle at around 10 W, and things can be pushed to around 6 W with a PicoPSU and the right board. 6th to 9th gen is probably the best era for low idle power overall, although 10th gen can still be very good. Your best bet generally is (basic) ITX boards from manufacturers that are somewhat competent in terms of BIOS power saving features, or prebuilt SFF-ish systems.
I thank the Google algorithms that made me discover you. I think your content is among the best I've seen: dense and incredibly well explained. If it's not already the case, you would make a great teacher.
Does the PWM control take into account drive temps? They can get toasty just by idle spinning. There’s some debate about ideal temps though. Some data suggests that some warmth isn’t bad but there’s definitely a ceiling to that.
Right now they don't, it's currently setting it based off CPU temps.
I have been talking with ASRock Rack about their fan control setup-right now it's a little rough around the edges (I didn't cover it in this video because it seems like that could improve).
Another note about the SLOG: if you don't have a dedicated log device, ZFS just uses your pool to store the log without being explicit about it. Yet another quirky thing is that the log is never actually read from unless you're recovering from an unclean shutdown/export of the pool. And if you aren't tired of hearing quirky things: if you only use zvols, the log is useless and it's best to totally disable it for your pool, since the filesystem you put on your zvol would be in charge of handling its own journal or log.
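For reference, a rough sketch of the knobs being talked about here; the pool and zvol names are hypothetical, and whether sync=disabled is acceptable depends on how much crash safety you're willing to trade away:

# Add or remove a dedicated log (SLOG) vdev:
sudo zpool add tank log /dev/disk/by-id/nvme-EXAMPLE_SSD
sudo zpool remove tank /dev/disk/by-id/nvme-EXAMPLE_SSD

# Per-dataset/zvol sync policy; sync=disabled skips the ZIL entirely,
# trading crash consistency for speed:
sudo zfs set sync=disabled tank/vm-zvol
sudo zfs get sync tank/vm-zvol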
As you found, SLOG only helps with random writes, and only if they're synchronous; ZFS is copy-on-write, so as long as the transfers are asynchronous and thus the storage layer doesn't have to wait until they're written to disk to return a response to the caller, the performance of random writes basically turns into that of sequential writes (assuming sufficient RAM to buffer the transfers).
You show 600MB/s for async random writes for just the hard drive array and 243MB/s for synchronous random writes with the SLOG (since they're just SATA) but it would have been interesting to see the random write performance of just the hard drives with sync forced on (which I would assume would be substantially lower than your result with SLOG).
Obviously this is not the scenario for async writes (which is most fileserver-style transfers like SMB) but sync 4k random writes are a very important metric for databases, hosting VMs on the pool, or in some cases mounted block devices like iSCSI depending on your stack and settings.
If you can still find them, any of the old Intel Optane 16GB/32GB/64GB M.2 sticks make for great SLOGs as they provide extremely high IOPS and have excellent write endurance, and more predictable performance under load than regular SSDs (especially compared to something like QLC).
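If anyone wants to fill in that missing data point themselves, a minimal sketch (dataset name and job parameters are arbitrary):

# Force synchronous semantics on a scratch dataset, then hammer it with 4k random writes:
sudo zfs create tank/synctest
sudo zfs set sync=always tank/synctest
sudo fio --name=sync-randwrite --directory=/tank/synctest \
  --rw=randwrite --bs=4k --size=4G --numjobs=4 --ioengine=psync \
  --runtime=60 --time_based --group_reporting
# Revert when done:
sudo zfs set sync=standard tank/synctest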
Did you enable ASPM at the BIOS level? My system was using 130 watts at idle; after I enabled it, that came down to 55 watts while running Windows. Loading Ubuntu brought it down even further, to 35 watts at idle.
Regarding the PSU, you can get ones that have a silent mode.
I think using a PSU rated for higher wattage will also lower the need for cooling. Then you can also look at its efficiency curve to find one that delivers the power you are actually using as efficiently as possible.
Edit: I am not sure if the bad efficiency at low wattage can push a PSU to need more cooling; I just figured it might be an issue with the sub-200 watts you are using on a 750 watt PSU. But I do think PSUs with a silent mode stay silent as long as the wattage isn't too high.
If you want a rule of thumb to go by: the bigger the heat exchanger/radiator, the more watts something consumes. If something looks like it has a chonk of metal on a piddly board, it's not going to be efficient with its power draw, and if it has moving parts, those will also take a lot of power. Try to avoid things that produce heat as a byproduct, even if they're slower.
I've been running Ubuntu on ZFS on RPi4 for over 2 years, booting off of USB mirrored, thanks to a combination of a tutorial written by you and another tutorial from somewhere on the internet.
Jeff, we’ll never forget the original thumbnail.
Heh, unfortunately it was performing poorly :(
Always a tradeoff between being cheeky or not. Sometimes it works, sometimes it makes YouTube sad!
15:50 did you seriously pull a game theory on us...
You better grab some Optane drives while they are available. I use mirrored 905p's, to host both ZIL/SLOG, and also a Special Metadata vdev. The ridiculously low latency on Optane makes it possible to host the vdev and ZIL/SLOG without issues on the same drives.
When you go for a special metadata device (and potentially small-file support) do you pretty much need 2-way/3-way mirrored configs?
@@MikeKirkReloaded Yes. If you lose a special metadata vdev you lose the entire pool.
Always fun! love the channel! Wish you the best!
In my custom build from a few years back, still running to this day:
Tyan S7002, 2x Xeon 5500s, 256GB DDR3 ECC RAM, 14 HDDs, 8 SSDs, dual PSU (both run as singles for additional Molex), Noctua coolers and 180mm case fans.
Total package under full (100%) load: 430-450W, with HDDs idle.
Just as a yardstick :)
Definitely interested in what you do for your off-site target. I was checking my subscriptions, and saw sticker shock from my offsite backup bill via Backblaze B2, and am now in the market to see if I can hit a reasonable capacity / reliability target at 1-2 years ROI on what I was paying Backblaze.
Check out how I'm currently using rclone + Amazon Glacier Deep Archive. Retrieval costs are high, but it's a lot cheaper (if using Deep Archive) than BB2: www.jeffgeerling.com/blog/2021/my-backup-plan
Will be reworking that with the new studio soon!
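For anyone curious what that looks like on the command line - not necessarily Jeff's exact setup; the remote and bucket names below are placeholders:

# Assumes an rclone S3 remote named "aws" is already configured:
rclone sync /tank/backups aws:my-backup-bucket \
  --s3-storage-class DEEP_ARCHIVE --transfers 8 --progress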
8:41 Oh wow, that's a nice and crisp shot of the screen!
I mostly run Rocky Linux but I won't hold it against Jeff for switching to Ubuntu :) I've never tried to run it on ARM and I'm the kind of guy who likes the path of least resistance so I understand.
He did set SELinux to Permissive, which makes Steve Grubb cry. :) I run in Enforcing, but I have a lot of SELinux experience. I understand how mysterious it seems, but running in Enforcing is not that difficult these days - except when it's not. SELinux really is a great thing for servers.
Great video on server efficiency!
Heh I have personally witnessed that crying a few times. Though some sysadmins who aren't familiar with EL tend to cry when it's enabled too haha.
@erling The ways of SELinux are mysterious... BTW Steve G is a nice guy
Nice to see the ARM approach for something different.
You may also want to test the power measurement tools you use.
The Kill-a-Watt is probably measuring actual Watts, but most of those meters do not take the power-factor (cos phi) into account and thus only measure VA and not actual power your power company will bill you.
Most high-end power supplies in your PC will have some power factor correction circuits, so they should try to keep voltage and current in phase as much as possible. But those circuits often won't work optimally when the power supply is used at the lower end of the amount of power it can deliver.
Also power meters struggle in the low-end of the measurement range.
One of the easiest ways to help a power meter give a more accurate power reading is to add some resistive load, like an old-fashioned incandescent light bulb with a tungsten filament.
This will have a perfect cos-phi of 1 and adds some base load so your meter doesn't have to struggle near the low-end of its measurement range.
If you try to measure "idle" power consumption, you can easily be fooled by those measurements as the power supply probably does have a truly bad power factor at the low power delivery end of its range.
This can make a huge difference as the power factor for idle can be easily less than 0.25 which will put your measurements off by a factor of 4 or more.
So as always: Measuring is knowing, if you know what you're measuring. (A Dutch expression: "Meten is weten als je weet wat je meet")
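To put rough, made-up numbers on that: a meter sampling 230 V and 0.5 A would report about 115 VA of apparent power, but at a power factor of 0.25 the real power is only roughly 115 × 0.25 ≈ 29 W - exactly the "factor of 4 or more" error if the meter presents VA as watts.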
I've been testing the Thirdreality plugs a bit, and they do measure power factor (among the other metrics), so I have pretty good confidence with them. I do plan on getting a nicer bench power measurement tool at some point that's calibrated and could be used to test the accuracy of various meters... but all in good time!
@@JeffGeerling Just as a simple test, you could also try to measure some devices with really poor power factor like LED lamps. Just plug in your measurement devices in series and check whatever they measure with the same load and then swap the order of the meters to make sure you can average-out the power consumption of the meters.
Then also measure some resistive load like a heating element (e.g. an old soldering iron). To make sure the soldering iron consumes a constant amount of power and isn't turning off once it reaches the target temperature, you can place the tip in a large vise or something else to dissipate the heat.
This doesn't require any pricy new equipment and maybe is a nice new topic for some video :)
I would not use a hub or a Y splitter - server-grade mainboards usually have plenty of fan headers, which then allows you to monitor each fan individually.
The new thumbnail worked, Jeff - I was more inclined to click when it changed :D
Heh apparently that is correct for most viewers, the new thumbnail has a bit higher CTR!
Wish I was in YouTube's A/B testing program; that way they could suss it out automatically for me.
Put headphones on for this to hear the difference between the fans and well, definitely didn't need to do that. The difference is crazy.
Yeah, and I turned up the sound on the Noctuas a bit too... still pretty much nothing.
The out-of-band management can use up to 9-10W (seen with some Supermicro boards). If you want to save on your electricity bill, go for a vPro/AMT board - they use around 3-5W.
This board uses 6-7W for BMC, it's not great, not terrible :)
@@JeffGeerling which is roughly the same as my ASRock Rack E3C236D2I. I still wonder why Supermicro needs more power to run the same chipset.
My homelab consists of a HA Green, a Pi 4, and a QNAP 9-bay NAS. In total it's about 50W.
On QNAP and Synology NAS units it is easy to set up the HDDs to automatically spin down when not in use, or the entire NAS to hibernate on a schedule. Assuming the setup is similarly simple on your OS, you should be able to get up to 40% less power usage (over time) by configuring those settings as needed.
I've considered this-the downside is enterprise HDDs aren't really meant to spin up and spin down a lot, and so the settings for that should be chosen very carefully. Most vendors I've spoken to recommend not spinning down the drives if you want the longest service life.
@@JeffGeerling interesting… I guess that makes sense!
What he said. I have my drives set to spin down in my Unraid setup if they go untouched for 48 hours or more. This keeps the regularly accessed drives spinning pretty much non-stop (unless the whole house goes away for a week) but does let drives that are only used for cold storage spin down. It takes some careful planning of file organization, but I also tend to replace my drives ahead of schedule, and RAID isn't a backup anyway, guys.
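On a plain Linux box (rather than Unraid/QNAP/Synology), the per-drive standby timer is usually set with hdparm. The timeout below is just an example - per the reply above, pick it conservatively for enterprise drives:

# -S 242 = spin down after ~1 hour idle (values 241-251 are 30-minute units); 0 disables it.
sudo hdparm -S 242 /dev/sdb
# Check the current power state without waking the drive:
sudo hdparm -C /dev/sdb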
Just to let you know Jeff... You are one of the greats, yourself.
Good video! How's your ham radio stuff going? Got any more radios, or a nice big beam antenna above your new workplace?
Still working all that out-probably later this year or sometime next year for a nice antenna setup... have to wait for my Dad to be able to help more on that project :)
7:29 9ct/kWh??? Our prices here in Germany are more than 0,40€/kWh! 😢
Very nice video! What _really_ piqued my interest was the setting up of ZFS using Ansible. Do you have any generic playbook examples or something to offer on that?
7:35 well if electricity was only 0.085€ in Italy I would already have a freaking datacenter in my garage
Could you add a link to the dB noise monitor you use please?
Reed Instruments R8050: amzn.to/3P0mwrX :) - I'll add to the description too!
The Grayskull e75 and e150 DevKits are available for purchase now at $599 and $799, respectively.
Bonus points for the title.
The guys that make the HL15 should, if not offer Noctua fans as an option, at least work on their fan noise specs. I'm certain that with a better choice of fans (not necessarily Noctua - be quiet! and many others offer lower-noise fans), a built-in/add-on fan hub, and some spacers, they could provide a much better UX.
Yeah: for people who've committed to HL15 pricing... having a $150 uplift fee for Noctua fans probably wouldn't make them blink: I'd go for it.
I bet some properly sized resistors and 3d printed spacers take care of the fan noise on the stock fans for much less than $200 worth of noctua fans if that's more than someone wants to spend.
@@chublez most likely so, though quieter fans would also help - there's a bunch of those that are not as expensive as noctua.
What pains me is that a person who's ready to pay a boatload of cash for the hl15 has to contend with jerry-rigging stuff which at this price point should be an option offered by the manufacturer
@@chublez Don't suggest resistors. They should include a proper fan controller. It could be as simple as taking a PWM signal from the motherboard and using 4 wire fans. Or they could use thermal sensors which might be better for keeping hard drives cool.
Hi Jeff, can you do a video about reducing the power usage of the Pi 5? Like undervolting, CPU governors, turning off modules, etc. Mine is drawing 9W with two USB-connected SSDs.
Hey, we got the same Sabrent drive dock! Currently also using it to manually back-up my NAS array 😂
Haha it is great for that. Though sometimes I feel like the drive door could use a little better attachment!
Yeah, just a plastic door with a latch, and I'm not too keen on the open back; it must be for ventilation, but I would rather have some sort of grille there instead.
I got the Mediasonic enclosure now (took some time importing from the US), HF7-SU31C, 4 bays and 10G USB.
The thing with the QVOs is that they are not only SATA but also QLC SSDs, where one cell stores 4 bits, which hits a bottleneck (especially on writes) when the cache on the drive is saturated.
Even when idle/not in use, the onboard 10GBASE-T ports will use quite a lot.
True-one disadvantage of server motherboards is they have a lot of always-on functionality (like the BMC and any extra NICs) that you can't disable, so it's just burning a few watts here and there all day.
@@JeffGeerling Rebecca can turn them off via nvparam
The NF-A12x25's are great. I have 3 on my gaming PC and it's whisper quiet.
Why not use a SilverStone RM43-320-RS? It holds 20 drives and is a far more polished solution. What am I missing?
Using SATA QLC SSDs for ZIL and L2ARC - a bit masochistic?
Just wanted to see how they do :)
@@JeffGeerling lol, the way you benchmarked them, it sounded like you were expecting TLC NAND performance and instead blamed them for being "consumer SSDs".
@@carbongrip2108 I was expecting nothing, really-a lot of forums just say "don't use consumer SSDs", but never offer any evidence they're better or worse, performance-wise. So I thought I'd test them to be sure.
The 8TB versions aren't that bad, honestly. I wonder if they have a larger cache due to the massive size of the SSD (relative to the QVO 1/2/4 TB models).
7:34 These power prices are more like power exchange prices; at home you pay 4-5 times what is displayed there.
Good to know-we have it good here in the US, power rates are sooo much lower!
@@JeffGeerling Yes, in Germany I pay €0.314 per kWh; the industrial company I work for pays around €0.15 per kWh, and they use 10+ MWh per day.
That map hurt me
Do not discount the effect of unpatched drivers and firmware on all devices in the chain. I'm in favor of more memory up front, so that less code gets executed and there's less device wait. Lastly, you may get better throughput if you try using affinity sets between the network interrupt driver and NFS (or Samba), and also for the storage controllers. Interesting results can appear, as I've seen in the past, depending on the size of the CPU L2/L3 cache.
You didn't address the static pressure of the fans, or whether you are getting the same HDD cooling before and after. I suspect with the swap you did there probably isn't a big difference, but on large server chassis without the dual fan walls, you need to factor static pressure into the equation.
Quite true; these fans have a little lower CFM rating but in my case drives are still running cooler than the ones in my home NAS (under 50C).
If you can control the speed, you are perfectly fine using typical 120mm server fans.
Noctua only shaves off a tiny bit of noise with their own trickery; the bulk of the noise reduction comes simply from low RPM.
Alternatively you can go for some $10-15 Corsair fans, which are silent.
The spacer brackets are a good idea, but there's no need to buy pricey Noctua stuff; you can easily 3D print some.
For speeding things up in that case, it's better to use big SSDs for a "special" metadata vdev, because the metadata vdev can actually hold all the small files too. That will leave the HDDs for big sequential transfers and the SSDs for the more random stuff.
BTW, ZFS will kill your QVO drives very fast.
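A rough sketch of what that looks like (pool, dataset, and device names are hypothetical); as noted elsewhere in the thread, losing a special vdev loses the whole pool, so mirror it:

# Add a mirrored special (metadata) vdev:
sudo zpool add tank special mirror \
  /dev/disk/by-id/ata-EXAMPLE_SSD_1 \
  /dev/disk/by-id/ata-EXAMPLE_SSD_2
# Also send blocks up to 64K for this dataset to the special vdev:
sudo zfs set special_small_blocks=64K tank/projects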
My rack sits about 8 feet from my desk. Everything in my rack is 2u or 4u (except my fan modded 1u switch) to make sure that I can cool them and keep them quiet.
Hey Jeff, molex splitters do exist, so should be easy to pick one up and then get a molex to SATA adaptor and use the fan hub like originally intended :)
Might end up doing that. I just hate those molex adapters, they scare me :D
So, could we just print some spacers for the stock fans and reduce noise that way?
A little bit, yes! I think it was optimumtech or someone who had designed some more universal spacers that work with many different fans.
Does zfs still eat your data like it used to? I had an issue with that years ago, and haven't used it since.
Well done @Jeff another great video! What about testing ceph on that same hardware?
Can you do a video on the sdm Raspberry Pi image configuration tool? I found it on GitHub today and was surprised you have no video of it (that I could find). It gives me Ansible/Terraform vibes for Pi SD card burning, which just sounds SWEET.
It's unfortunate you couldn't keep using Rocky. It's surprising though that 45Drives doesn't have ARM packages available for Rocky.
Unfortunately Arm support is still spotty outside of a lot of the datacenter stuff, where Ampere and other Arm companies have invested a ton of time and resources. We'll get there!
Hmm, as for a home NAS I'm planning to go with an (upstream first) ARM SBC. It's gonna run Guix and the filesystem is gonna be either btrfs or the new and shiny bcachefs.
Wait… RAID-10 gives you so little? Like halving the capacity for such little safety? Why would anyone use it over erasure codes now that they are available to us? (maybe it's about the performance. Reed-Solomon isn't the fastest I believe, so that's probably it)
Exactly-it's still slightly faster than RAIDZ2 if you just have four drives. Or possibly more, there's a limit though, and it depends on your drive speeds too, whether it's worth it or not.
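For anyone weighing the two four-drive layouts being discussed, a sketch (pool and device names are illustrative; in practice you'd use /dev/disk/by-id paths):

# Striped mirrors ("RAID10"): ~50% usable space, survives one failure per mirror pair:
sudo zpool create tank mirror sda sdb mirror sdc sdd
# RAIDZ2: also ~50% usable with four drives, but survives ANY two drive failures,
# at the cost of somewhat slower small random I/O:
sudo zpool create tank raidz2 sda sdb sdc sdd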
What case did you use in the first minute? That's cool, it has many storage bays!
How does the 6x CM4 Super6C score on the Top500 supercomputer benchmark, and how does it compare to one CM4 on the base IO board? Just for giggles and laughs. How does it compare energy-wise to your NAS?
Check out the top500 repo README - I have a number of comparisons towards the bottom: github.com/geerlingguy/top500-benchmark/tree/master#results
0:06 : Wow!!
Great video Jeff. In terms of efficiency you should try changing the power supply to an HDPLEX 500W GaN ATX power supply. Since the max-efficiency range of the power supply you're using is higher than what you're pulling, it could be costing you.
Looks like a cool PSU! It'd be neat if they offered some ATX-sized ones too, with 600/700W options.
I am hoping they come out with some larger options. In the meantime they have a sync connection to pair multiple units. @@JeffGeerling
Hi, the Samsung QVO drives drop to around 150MB/s read and write during sustained use; that might be why your HDDs are faster.
QLC is flat out trash
QVO drives read at full speed all the time. There are no speed limitations when reading for a sustained duration.
The 8TB QVO writes at 184 MB/s when it exhausts its massive SLC cache. He'd have to write some 320GB before a drop to that level.
@@toseltreps1101 I don't know why you choose to spew ignorance rather than educating yourself.
These aren't no-name cacheless SSDs with crappy controllers. They also only experience the "QLC downsides" well beyond what you'd likely write to them in a single session.
this video was quite nice. thanks :)
Hey Jeff, I’ve had good luck with Noctua fans as well. About the post fan swap PSU fan noise, have you tried the PC Power and Cooling Silencer PSU??? Just curious...
just curious, could we just plug the noctua fans directly into the fan connector with the HL 15 case? I thought it was also a 4 pin?
The fan connectors are 2-pin, so you can get them to plug in, but then they'd just run 100% (on 12v power) the whole time. The Noctuas would still be quieter than the CoolerGuys fans, but not as dramatically.
Thanks Jeff! I'm thinking of doing the same thing, but unlike your setup, I don't have an active CPU cooler. Would it make sense to plug all the fans into the case fan header? Or should I plug them into the CPU fan headers? I'm not sure the Supermicro motherboard knows when to spin the fans up and down on the case fan header lol.
Would love to see something like this but with Arctic P12 or P12 Max fans. How does that compare noisewise to the noctua fans at a much lower cost.
What about video card support on ARM64 with this motherboard?
Good question! I've tested a few different Nvidia cards (4090, 4070 Ti, and 3080 Ti), and a couple AMD and all work, but the AMD cards sometimes need driver tweaks. Those are slowly getting ironed out over time in Linux!