This is Patrick from STH, presenting another insane server at an insane cost x)
I love the products presented by your channel but damn it's too expensive for me. (crying)
Haha, yah
They look so cool but $$$ makes me sad :(
just imagine the jet engine sound in your home lab and you'll feel slightly less bad ;)
Presenting new and awesome servers is a great thing. Whether you can afford or justify the price is not the reviewer's problem. I think Patrick/STH was, as always, honest in his review.
It's ok to nerd out on expensive "big boy" hardware. Car people do it all the time
@@Jaabaa_Prime are you okay Ken ?
I think you are a bit on edge for a simple harmless joke.
I hope you are doing good. 🙏🏼
Amazing!! Such a strong 1U, especially since it comes with redundant power supplies!
I usually recommend people upgrade when a server is starting to hit around 80 percent load, and aim for 60 percent load with the replacement. So around 70 percent average server load sounds just about right lol
60 percent means your workload can scale 1.5x over the 5-year lifespan of the box. And frankly, if you're expecting more growth... you probably want to upgrade every 2 or 3 years instead of buying for 5.
That makes sense.
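A quick sketch of that sizing math in Python; the 10% annual growth rate and the thresholds are illustrative assumptions, not numbers from the video:

```python
# Illustrative sketch of the "deploy at ~60%, upgrade near ~80%" rule above.
# The 10% annual growth rate is an assumption for the example.

def load_after(start_load: float, annual_growth: float, years: int) -> float:
    """Projected utilization if the workload compounds annually."""
    return start_load * (1 + annual_growth) ** years

start = 0.60   # deploy the replacement at ~60% load
growth = 0.10  # assumed ~10% workload growth per year (illustrative)
for year in range(6):
    util = load_after(start, growth, year)
    flag = "  <- past the ~80% upgrade trigger" if util >= 0.80 else ""
    print(f"year {year}: {util:.0%}{flag}")
```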
Just a heads up, the link to the main site article is broken at the moment; it's missing the servethehome part.
The server is super interesting though, really like it.
Eh, I'd be happy to give up the internal M.2 drives and SATA drives if it meant I could get another two external NVMe drives.
Your link to the article is messed up
the site article link in the description is broken :(
I really like it and am excited about the pricing. I hope you can also mount it the other way around, as otherwise it could be a problem in the datacenter.
You can swap the rack ears.
Sweet. The question for me is: how loud is it?
I’m going to guess *loud*
Yes loud under load.
Gimme that, and a few industrial-grade 30/60TB SSDs for my new homelab... got it on my todo list right after a self-sufficient solar-powered catamaran and before my own island, after I win a considerable jackpot...
I totally agree on more 60TB drives!
7:06 Incomplete review, we must know if we can put the server into a rack the right way around! 🙂 Off topic, but is STH getting a "production" 45drives HL15 to review?
We might. I really want to do the 2.5" NVMe 45drives system though.
Seems like they missed a trick to be learned from the 2019 Mac Pro: most of those cables are unnecessary when you design a tightly-integrated system like this.
They probably could have cut more cables if they really wanted to, but going by the manual the system doesn't actually look that tightly integrated: there are two variants based on PSU length (the 650W short-PSU one is shown here; in the 1300W unit the front 2.5in bay is gone because the PSUs occupy more space, and that is where the power distribution board goes instead), which means the PSUs can't just plug into the motherboard, since the attachment point isn't in the same place.
The E1.S cage is also optional, which means more cabling, and the I/O on the intake fan side is all cabled because no part of the motherboard is even close to it.
If they'd been willing to do a spin of the motherboard exclusive to a 650W config with E1.S populated, they could have had the PSUs connect directly to it, integrated the power distribution board, and connected the cabled I/O and E1.S directly to the motherboard; and if they'd spent more money on some sort of sliding carriers, like you see on some switches, they probably could have done away with the fan power cables. But that sounds like a lot of lost versatility, extra motherboard PCB area, and mechanical complexity just to get rid of a few wire bundles.
Yes
So, so close, yet not so close to being perfect. The PSUs should be at the back and the drives at the front. There are so few manufacturers that do it right (there are a few from Supermicro, but I just gave up and built my own from scratch instead). But really nice to see it has U.2, M.2, EDSFF, and SATA, everything doubled. Quite versatile indeed. Three PCIe slots is also great; I would prefer one of them to be OCP instead, but I guess that is not bad either. 80W idle, though, is meh.
You could swap the rack ears and make that possible.
3:55 I think you mean RAID 1, not RAID 0...
You sure the rack ears aren't just on backwards?
They can be installed in either direction.
I've been running all my SFF and tower desktops back-to-front for years. The only mod is to add a 2-way, 2-pin Dupont splitter on the front-panel power button header and fit a push-button switch that I thread through a vent hole in the rear panel (hot-glued in place).
I've never understood why desktop cases have all the connectivity at the back, out of sight, and all the styling bullcrap up front.
Silverstone tried something different with the I/O at the top instead of the back, but I guess it never took off. Loved my Raven case that was set up that way, though.
It would be pretty silly if the rack ears weren't reversible. Are the fans in push or pull configuration?
From the pictures on the Asus website, it seems that the fans are pulling air from the PSU side in both the R (ears on the traditional side) and F (ears on the PSU side, like the version shown in the video) versions.
I'd actually consider buying one, but I haven't found any seller in Germany. Paper launch?
Love the lstopo diagram; it's always cool to see the topology, and it helps me conceptualize the system better.
I totally agree. We try to put the lstopo or system block diagram in every server review (and most motherboards at this point.)
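For anyone who wants a quick look at topology without the diagram, here is a minimal sketch, assuming a Linux host with sysfs, that prints the NUMA node to CPU mapping; hwloc's lstopo itself shows far more (caches, PCIe devices, NICs, NVMe):

```python
# Minimal sketch: print the NUMA node -> CPU mapping from sysfs on a Linux host.
# lstopo (from hwloc) shows far more detail: caches, PCIe devices, NICs, NVMe, etc.
from pathlib import Path

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"))
if not nodes:
    print("No NUMA topology exposed here (non-Linux host, or it is hidden).")
for node in nodes:
    cpulist = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpulist}")
```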
Are those 2 front NVMe bays U.3 ready?
The Asus web page for this product does show the rack ears on the front, rather than the back.
The Asus website shows both versions: the F version has the rack ears on the PSU side, the R version on the other side.
It's good that it's been confirmed that it has the mounting holes for the ears to be used either way.
Can you make a video about the Xeon lines of processors? With Core i3/i5/i7 everything is easy. With servers, not so much.
We usually cover these on the STH main site.
Is this video from before the move, or did you just set up your studio exactly the same as it was in Texas?
Great question. Published post-move. Recorded pre-move.
How much would that be? Can I put a 300-watt dual-slot GPU in it?
It might work, but you might also need bigger PSUs with a 300W GPU installed.
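Rough power-budget arithmetic behind that answer; every wattage below is an illustrative guess, not a measurement of this server:

```python
# Back-of-the-envelope power budget for adding a 300W GPU.
# All component numbers are illustrative assumptions, not measured values for
# this server; check the vendor's power calculator before buying.

components_w = {
    "CPU (max TDP)": 350,
    "memory + motherboard + fans": 150,
    "drives + NICs": 60,
    "GPU": 300,
}

total = sum(components_w.values())
for psu_w in (650, 1300):
    headroom = psu_w - total
    verdict = "fits with margin" if headroom > 0.2 * psu_w else "too tight, bigger PSUs needed"
    print(f"{psu_w}W PSU: estimated load {total}W, headroom {headroom}W -> {verdict}")
```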
Can you get reverse-airflow fans if you do indeed mount it around "the normal way"?
You get the RS2-R if you want normal orientation.
@@nadtz When you look at the pictures on the Asus website, the fans seem to be mounted in the same orientation for both the R and F versions (you can see the flow-direction arrows on the fans). If that is the case, the R versions will blow hot air out the front. Definitely something you want to take into consideration when mounting those versions in a rack.
Yes.
What is the best server for $1800 or below?
I've always loved the 'pizza box' form factor, this thing packs a lot of hardware into that tiny case.
Allowing you to use a 2-post rack in legacy locations, these things are awesome. Supermicro still really dominates this space today.
$parc one up, Cheech 🙂
@@briceperdue7587 I probably built a few hundred 512L's once upon a time. Could knock one of those together in like 15 minutes but obviously it was nowhere near as dense as this server.
Awesome review! Keep it up.
Thanks
You're welcome @@ServeTheHomeVideo
Hey Patrick, do you plan on hosting your videos elsewhere (if YouTube succeeds in driving your audience away)? I subscribed to the feed on your website a while ago. Also, what's the point of the kitchen-sink storage options, as opposed to different versions of the server each supporting one of the options (besides the PCIe card)? I wouldn't want to stock 5 different spare parts. On the other hand, I would want at least 3 of a given type for RAID 5.
Probably not in the near term, but we might start doing vertical for elsewhere when we get the new studio in Nov.
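The standard usable-capacity math behind the "at least 3 of a given type for RAID 5" point; the drive counts and the 3.84TB size are illustrative, and this is generic RAID arithmetic rather than anything specific to this server:

```python
# Generic usable-capacity math behind "at least 3 drives for RAID 5".
# Drive counts and sizes are illustrative assumptions.

def usable_tb(level: int, drives: int, size_tb: float) -> float:
    if level == 0:
        return drives * size_tb          # stripe, no redundancy
    if level == 1:
        return size_tb                   # mirror: one drive's worth usable
    if level == 5:
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * size_tb    # one drive's worth goes to parity
    raise ValueError("level not covered in this sketch")

for level, drives in [(0, 2), (1, 2), (5, 3)]:
    print(f"RAID {level} with {drives} x 3.84TB: {usable_tb(level, drives, 3.84):.2f} TB usable")
```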
Q: What can server vendors do with ARM v9 4nm with unlimited TDP?
4nm will help a lot. Remember, though, that the actual core IP on a modern CPU is very small compared to all the other IP for things like the fabric, memory controllers, PCIe controllers, and especially cache cells.
“Redundancy in raid 0”?
Redundancy in the chances of losing your data.
"redundancy or raid 0" is maybe what he intended to say?
It was a silent “or”, emphasised by the “whatever”
Perfect for backup
This is Scary RAID, you are scared of losing all of your data.
Love the variety of storage options. So fun.
Yea really a fun system!
I really don't like that most servers have the rack ears at the front... Sometimes I want to do maintenance on the hardware (swap a bad memory module, a bad GPU, whatever), and with most servers I have to take the server fully out of the rack for that, because the chassis lid can't open while the server can't be pulled all the way out of the rack. If I could pull those servers out the back, though, that wouldn't be an issue at all...
Duh, that's why you get sliding rails. If you are cheaping out and getting static rails, then sucks to be you.
I prefer always having rails. That is a minority preference however.
@@ServeTheHomeVideo I never understood that, as it is a health and safety issue. If you don't have sliding rails, then especially if the device is high up in a rack, you are going to need two people for any maintenance rather than one. Even if you decide you don't care about dropping a server on yourself or doing your back in during maintenance, dropping the server would turn simple maintenance, such as replacing a faulty DIMM or fan, into a very bad day. Spoiling the ship for a ha'p'orth of tar, frankly. On that point, I take issue with vendors of InfiniBand, Fibre Channel, and Omni-Path switches: a pox on you all.
The extra storage on the back of the server is nothing new. I've seen server chassis with those when shopping around for a new server chassis for my custom server build.
But FIVE different types of storage in pairs?
Jesus, the heatsink on the VRM.
This is some sexy hardware.
Very
How loud?
yes
Very "yes"
Hello, I really enjoy learning about new technology; thank you for explaining it so clearly.
I'm not sure why you'd want *different* kinds of storage? It just sounds like a pain to spec initially and then a pain to upgrade, let alone to hold any spares for.
On the other hand, they are there. That was the amazing part.
I need it
Yes
dope
Very
@@ServeTheHomeVideo anything to replace my old R710s :)
Can you get an SFP+ option? Because 10GBase-T is useless in a data centre. Oh, and the X710 was first released in Q4 2014, so that is a nine-year-old chipset and not remotely recent. Clearly, you have been on the crack pipe. Had to double-check because I knew our compute nodes purchased in 2017 all have X710 NICs.
Why is 10GBaseT useless in a data center?
There are a lot of folks still rocking X540-T2s and older. Also, the updated X710 10GBase-T (check the X710-T2L for an example) is from 2019. Fortville required a re-spin when it was still a 40GbE/10GbE SFP+ part, and so it went through revisions.
@@allanwind295 For lots of reasons. Firstly, 99% of data centre switches are SFP+ because, for starters, they are cheaper and lower power, so running costs are significantly lower. Within a rack, or even to an adjacent rack, a DAC cable is lower power and thus cheaper. If I need more reach than can be managed with a DAC cable, well, it's SFP+, so I will just stick in some SR transceivers and go fibre optic. Breakout DAC cables are awful to work with and super inflexible; much better to go optical, break out to a patch panel, and get a bunch of flexibility. I have hundreds of 10Gbps ports at work and not a single 10GBase-T among them. If someone presented me with a server that had them, my first port of call would be to swap the NIC for something with SFP+. I was also talking with our head of networks (I work in HPC), and they don't do 10GBase-T either. There is the grand sum, in the entire university, of one port using a transceiver, because some dumb research group went out, broke procurement rules, and purchased a DGX box and wants to use its 10GBase-T port. They are a problem group anyway, because they didn't consult on anything before making the purchase, so we are having to do a major power upgrade to accommodate them.
Well, this is clearly positioned to be deployed in an edge environment. Copper isn't wrong there, and it has room for expansion.
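Rough running-cost math behind the DAC vs 10GBase-T power point above; the per-port wattages and electricity price are ballpark assumptions, not measurements of this server's X710 ports:

```python
# Ballpark running-cost comparison of SFP+ DAC vs 10GBase-T ports.
# Per-port wattages and the electricity price are rough assumptions for illustration.

watts_per_port = {"SFP+ DAC": 0.5, "10GBase-T": 3.0}
ports = 48                  # e.g. one fully populated ToR switch
kwh_price = 0.15            # $/kWh, illustrative
hours_per_year = 24 * 365

for kind, w in watts_per_port.items():
    kwh = w * ports * hours_per_year / 1000
    print(f"{kind}: ~{kwh:.0f} kWh/yr, ~${kwh * kwh_price:.0f}/yr for {ports} ports")
```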
Every new video Pat looks bigger. Watch your weight 😂
This was recorded near the end of 32 flights in August/ September. Very rough :-/
Second! :)
Geez, take a breath; the constant chatter is off-putting, I had to leave.
First!
I want the motherboard inside this thing. What is the part number? @servethehome
Zoom in, YouTube.