
1u Servers are DEAD! Long Live 2u Servers! But Why? -Ft. Supermicro AS -2114GT-DNR

  • Published: 17 Oct 2022
  • Is this what the future looks like? More like, this is what the future will sound like! At least until they make quieter fans. Join Wendell as he goes over his 2u GamersNexus project! Is it a supercomputer? Pretty much! Is it expensive? Yeah, $70,000! Is Wendell like a little kid in a candy shop? You betcha!
    System Specs: www.supermicro...
    GPU Specs: www.amd.com/en...
    GamersNexus Vid: • 3000W AMD Epyc Server ...
    **********************************
    Check us out online at the following places!
    linktr.ee/leve...
    IMPORTANT Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
    -------------------------------------------------------------------------------------------------------------
    Intro and Outro Music: "Earth Bound" by Slynk
    Other Music: "Lively" by Zeeky Beats
    Edited by Autumn

Comments • 313

  • @SimmiesSchrauberChannel
    @SimmiesSchrauberChannel a year ago +354

    One day apart in 2 videos: Linus "I want all my LAN-PCs in 1U, so I don't waste 1 rack-slot" - Wendell "1U is dead cause 2U is more efficient" XD

    • @handlealreadytaken
      @handlealreadytaken a year ago +44

      Enterprise server vs bespoke gaming chassis. However, not sure why Linus didn't just get a second rack, move the networking equipment, and do five 4U chassis to avoid the headache. Those are easy to obtain and let him run more common components.

    • @Blustride
      @Blustride a year ago +48

      In fairness, Linus isn't using the chassis fans for any significant amount of cooling, so that negates half of the reasons Wendell suggests that 1u is dead.

    • @wiziek
      @wiziek a year ago +63

      Linus isn't a technical person.

    • @EminemLovesGrapes
      @EminemLovesGrapes a year ago +27

      @@wiziek Nowadays he basically outsources all of the knowledge and throws either his money or his influence at the wall.

    • @Mallchad
      @Mallchad a year ago +9

      @@handlealreadytaken His ideas were unsustainable and ended up at
      "I need 1 rack per computer", which pretty quickly devolves into an explosion of racks...
      Prob best not to buy a new rack every time he has a new idea :P

  • @GeoffSeeley
    @GeoffSeeley a year ago +103

    @1:39 the 1U servers aren't dead, they're just huddled together in 2U chassis for warmth.

  • @JoshLiechty
    @JoshLiechty a year ago +233

    Having spent some time with multi-node chassis-based systems like this, my vote for a collective noun for a group of servers goes to "a cacophony."

    • @MiIIiIIion
      @MiIIiIIion a year ago +114

      Alternatively: "A tinnitus of servers".

    • @Level1Techs
      @Level1Techs a year ago +69

      I am getting such a kick out of these replies

    • @waterflame321
      @waterflame321 a year ago +25

      How about a "whatt?!" Because you can't hear anything over the fans

    • @johnmijo
      @johnmijo a year ago +5

      A *MULTIPLICITY* of Nodes/Servers ?

    • @jannegrey593
      @jannegrey593 a year ago +10

      "Nuisance" or "Pain in the Ass" sounds about right for when you have to troubleshoot them. For those rare times when everything is okay? "Hairdryers" is already taken by some GPUs, and in US English I don't know any short word for vacuum cleaner. But when you have a whole rack of them, you certainly need protective platforms, like on aircraft carriers when jets are taking off. When those fans spin up on every unit at the same time, you have the most important building block of a wind tunnel. And yes, there are wind tunnels (or at least wind simulators) that use a lot of PC fans, so that you can control the flow and strength of the wind with good granularity and create uneven wind to simulate, for example, an urban environment.

  • @UntouchedWagons
    @UntouchedWagons a year ago +29

    A gaggle of those servers would certainly murder my power bills, and my ear drums.

  • @johntotten4872
    @johntotten4872 a year ago +4

    Legend has it headphone users' ears are still bleeding.
    A scream of servers?

  • @jacobnoori
    @jacobnoori a year ago +16

    Finally, more server content! Please make them more frequently!

  • @PhoeniXfromNL
    @PhoeniXfromNL a year ago +48

    it's always nice when Wendell is excited about something

  • @Gilgwathir
    @Gilgwathir a year ago +4

    Wendell doing the sillies when he's excited 🙂 Love it! Also the plural of servers should be a sounder of servers (a group of wild boar is called a sounder) because they make such a racket!

  • @TwistedD85
    @TwistedD85 a year ago +23

    I know I'll probably never get to work with anything like this, but it's still fun and interesting to watch. It's like I'm on a field trip to a data center and the technician is trying to make everything fun and engaging for the students :D

    • @robr4662
      @robr4662 a year ago +11

      You may not be able to afford this but used enterprise stuff can be had extremely cheap and you can have almost as much fun. ;-)

    • @morosis82
      @morosis82 a year ago +4

      Some of the older X10 platforms from Supermicro are getting somewhat affordable these days; the Twin family of servers isn't crazy anymore.

  • @MrLamrod174
    @MrLamrod174 a year ago +2

    A serfdom of servers 😅
    Also, I hope you had hearing protection while in your comms room! That node was SUPER loud!

  • @keithpetrino
    @keithpetrino a year ago +4

    A racket of servers. A reference to the fact that they're in racks but also to the noise.

  • @dismafuggerhere2753
    @dismafuggerhere2753 a year ago +8

    a whole restaurant of servers ?
    I'll show myself out

    • @acubley
      @acubley a year ago +2

      You got a gen-u-wine laugh out of me!

  • @Chloiber
    @Chloiber a year ago +7

    We've had a few multi-node chassis from Supermicro running for several years, mainly 2U quad-nodes (TwinPros, I believe).
    While having multiple nodes packed so densely in a single chassis is great, it comes with a major downside:
    the nodes often share a single backplane (which is partitioned). So if you have a failure there, you are screwed. Additionally, if you have an issue with an onboard controller, you are screwed as well: you need to replace the whole node, as you cannot simply install a backup RAID card / HBA.
    So while yes, these things are great, you should be aware of the downsides of some of these models. Ours always ran great without any issue until I bricked an onboard controller. After half a day and many tries I was able to recover it, but it made me very aware of the downsides :-)

    • @Loanshark753
      @Loanshark753 a year ago

      @Chloiber Do you know if server racks with shared PSUs and cooling fans exist, to centralize components? Maybe one standard-height rack with two nodes per U and three or five shared PSUs. For further energy optimisation the systems could be liquid-cooled and the rack could be powered by 400-volt direct current.

    • @jfbeam
      @jfbeam 9 months ago

      Everything is built in these days. You're lucky if you can replace a processor or memory. (And now there's Stupid(tm) to prevent changing the processor.)

  • @wyattarich
    @wyattarich a year ago +1

    Every time I see a new upload, I'm excited. I can't say the same about ANY other channels on YT. I love what you're doing Wendell-never stop!

  • @survey1010
    @survey1010 a year ago +14

    Thoughts on doing walk-through of your data center / "server room"? Would be interesting to see what you're running for day-to-day.

  • @MazeFrame
    @MazeFrame a year ago +2

    9:42 You can feel the current limiting making the fans start up slowly! Beauty!

  • @MarkRose1337
    @MarkRose1337 a year ago +12

    1u never made sense to me for the reasons mentioned for going 2u in this video. Take it to its logical extreme though and you're back to blades of some sort!

    • @christopherjackson2157
      @christopherjackson2157 a year ago +1

      It arguably could have made sense in some extreme circumstances back when Intel was limiting everyone to 4 cores per socket. For customers looking to run a couple of hundred or thousand cores it could save them the cost of building a new physical space. But that was quite a while back now lol.

    • @AndrewFrink
      @AndrewFrink a year ago +2

      Everything old is new again.

  • @mtothem1337
    @mtothem1337 a year ago +48

    I get that it's not really your thing, but I think many of us would be interested in seeing builds like these that are optimized for energy efficiency / low noise instead.

    • @Blacklands
      @Blacklands a year ago +4

      (Is your avatar Lain with a crown of roses??)
      Also yes, I would like to see that. I think a bunch of us (maybe even the majority?) don't have a noise-insulated server room at home!

    • @jmwintenn
      @jmwintenn a year ago +4

      The server room is built to contain the sound. They don't care how loud the servers are as long as vibration is controlled.

    • @morosis82
      @morosis82 a year ago +5

      @@jmwintenn sort of true, but systems that need fans running at full speed constantly spend a lot of power budget on cooling and not computing.

    • @bernds6587
      @bernds6587 a year ago

      @@morosis82 Well, having the fans at 100% all the time makes no sense, be it for power efficiency or for wear, especially on the bearings. When Wendell entered the server room, you can hear one of the servers constantly cycling back and forth between two fan speeds -> not full fan speed.
      When the "new" one gets turned on, the fans spin up to full speed (PCs do that too) and then reduce that speed after successful initialization.
      On fan speeds in general: a certain minimum fan speed is necessary for the fans to spin at all. I've never seen a 10k RPM fan able to spin at 1k RPM. (1U server fans can go up to over 20k RPM.)
      The combination of density and heat production makes such loud and truly "moving" fans necessary.

    • @im.thatoneguy
      @im.thatoneguy a year ago

      @@bernds6587 Unfortunately Supermicro doesn't have good fan-curve controls... because they don't care.
      I had to write an IPMI hack script to do it on our NVMe server because they offer no customization.
      Their solution is "Oh, it's 1C over threshold? Time for 100% fan until it's cool enough, then back to 25% for 5 minutes", which is way more irritating than keeping the fans a little higher and holding steady.
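For anyone curious what that kind of IPMI workaround looks like: below is a minimal sketch, not the commenter's actual script. The raw command bytes (`0x30 0x70 0x66 ...`) are the widely circulated fan-duty command for X9/X10-era Supermicro BMCs; verify them against your own board before use, and note that the ramp endpoints in `duty_for_temp` are arbitrary assumptions.

```python
# Sketch of an IPMI fan-curve override for a Supermicro BMC.
# Requires ipmitool installed and BMC access (typically root).
import subprocess


def duty_for_temp(temp_c: float) -> int:
    """Map a temperature to a fan duty (25-100%) along a simple linear ramp."""
    low, high = 40.0, 75.0  # ramp endpoints in deg C (assumed values)
    if temp_c <= low:
        return 25
    if temp_c >= high:
        return 100
    return round(25 + (temp_c - low) / (high - low) * 75)


def set_fan_duty(zone: int, duty_pct: int) -> None:
    """Set one fan zone's duty via a Supermicro raw command (verify for your board)."""
    subprocess.run(
        ["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01",
         f"0x{zone:02x}", f"0x{duty_pct:02x}"],
        check=True,
    )


# Example mapping along the ramp (no BMC needed for this part):
print(duty_for_temp(55.0))  # 57

# In a real loop you would read a temperature (e.g. via `ipmitool sdr`)
# and then call: set_fan_duty(0, duty_for_temp(temp))
```

Run something like this from cron or a small daemon loop, feeding it a temperature reading, to get the steady fan behaviour the comment describes.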

  • @nihalrahman7447
    @nihalrahman7447 a year ago +15

    Wendell and LTT's Anthony should collab. Talk about general server stuff, Linux distros, and how to dominate the world.

    • @joemarais7683
      @joemarais7683 a year ago +7

      That’ll never happen. The powers that be would never let that much nerd power collect in one room

    • @alexmartinelli6231
      @alexmartinelli6231 a year ago +3

      That would be EXTREMELY cool. Hope it happens someday

  • @kevlarandchrome
    @kevlarandchrome a year ago +20

    I love how the sound of the fans comes together into a kind of "screams of the damned from far away" old-horror-movie sound; very season-appropriate. The hardware's pretty damned dope too.

    • @jimecherry
      @jimecherry a year ago +1

      banshee fans

    • @ghostbirdofprey
      @ghostbirdofprey a year ago +1

      Suddenly I wonder if there's a supercomputer or other cluster named "Banshee"

  • @markmulder996
    @markmulder996 a year ago +2

    And here is Linus (LTT) just now building five 1u gaming systems ;)

    • @CycahhaCepreebha
      @CycahhaCepreebha a year ago +1

      To be fair a gaming computer doesn’t need redundancy or anywhere near as much cooling, which is what this video is about. Linus outsources the cooling to an external radiator anyway.
      Linus’ new gaming computer is stupid for many reasons, and while the 1U rack case is definitely one of them, a 2U case wouldn’t have been any better. The issue there is insisting on stationary PCs in the first place.
      The premise of the video was that he needed something unobtrusive for his children to game on. Instead of a server closet we know he won’t take proper care of, the solution is to just get them macbooks with thunderbolt docks instead. Plug it in at home and it’s a decent gaming rig, bring it to school and it’s a good study computer. With actually good parental controls. Unless you actually need a full-power workstation, desktop PCs are almost never the right answer today.

    • @markmulder996
      @markmulder996 a year ago

      @@CycahhaCepreebha I know, the timing is just funny. One day Linus is building five 1U gaming rackmount systems, and the day after there's Wendell saying 1U is dead :)
      But of course it's two entirely different situations, especially since Wendell is talking enterprise, and Linus, as advanced as it may be, is still talking about home usage.

  • @t.m.grokas6832
    @t.m.grokas6832 a year ago

    I paused @7:23 and accidentally discovered your next video's thumbnail. Editor Autumn, you're welcome.

    • @Level1Techs
      @Level1Techs a year ago +1

      That was actually one of the contenders for this video lol! Fun fact, all the thumbnails are created with assets from the video it is being made for. ~ Editor Autumn

  • @nukedathlonman
    @nukedathlonman a year ago +1

    Big agreement: a 2U chassis with 2U redundant PSUs and a full 2U cooling system, combined with doubled-up 1U internals, makes much more sense for space utilization and redundancy.

  • @jackhildebrandt7797
    @jackhildebrandt7797 a year ago +3

    Dang, I was excited for Wendell to look at one of the Cray EX liquid-cooled nodes.

  • @halbouma6720
    @halbouma6720 a year ago

    I gave up thinking about dense 1U servers myself over a decade ago because I'd run out of power long before rack space in every cabinet. Even in this video you're not able to plug more than one of these into your circuit lol. So I standardized more on 2U setups for all the reasons you gave: fans for airflow, more room for storage and cards, or GPUs, etc. Plus it's easier to work on than some ultra-dense 2-servers-in-1U setup. Thanks for the video!

  • @wskinnyodden
    @wskinnyodden a year ago +1

    So Server Cadres based around 1U Servers are going the way of the Dodo and instead we'll have some sort of Irish based Server Cadre Datacenters around "U2" nodes :P

  • @LiLBitsDK
    @LiLBitsDK a year ago +3

    watching Wendell booting up a server being blasted by the air is like watching a kid in a giant candy store for the first time in their life :D

  • @scarkillerful
    @scarkillerful a year ago +1

    "Definitely think you'll find that appealing"
    god fucking dammit😂

  • @velo1337
    @velo1337 a year ago +2

    It also comes down to whether you are single-tenant or multi-tenant and how the SLAs are structured. Those 1Us are damn cheap; we swap them out like underwear :) They're also very interesting if the stuff you run doesn't need a lot of compute, like webservers and such. For database servers you usually run 4U servers since you need the PCIe slots.

  • @Verhagenvictor
    @Verhagenvictor a year ago +5

    Wendell, my first thought on this was "huh, that kinda looks like a horizontal blade setup". What are your thoughts on that comparison? Are blades going to make a comeback?

  • @andljoy
    @andljoy a year ago +4

    9:41 Sounds you don't want to hear when you are at the back of a messy rack. Happened to me last week when I was trying to clean up some old stuff at the back of a rack, and all of a sudden our Pure Storage started sounding like a jet taking off as I knocked a PSU out :D.
    This server just screams VDI at me.

  • @ajr993
    @ajr993 a year ago +3

    Both HPE and Dell sell a lot of servers in the 1U form factor. For example, the HPE ProLiant line has a lot of cheaper 1U configurations like the DL325. No, it's not used in a datacenter, but there's a huge use case for racks outside of a data center: enterprise customers need racks but don't have an entire datacenter. 1U is not dead at all in the SMB space.

  • @TheClumsySpectre2
    @TheClumsySpectre2 a year ago +5

    Do you think eventually we'll move to 4U equivalents? For that, one power supply failure would still leave 3 PSUs for 4 systems, which would proportionally offer more power per system and offer redundancy even with one unit down. Could also use larger fans again.
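Running the numbers on that suggestion, using a hypothetical 2000 W PSU rating (an illustrative value, not a figure from the video):

```python
# Compare per-node power after one PSU failure across chassis layouts.
PSU_WATTS = 2000  # assumed rating per PSU


def surviving_watts_per_node(psus: int, nodes: int, failed: int = 1) -> float:
    """Power available per node after `failed` PSU(s) drop out."""
    return (psus - failed) * PSU_WATTS / nodes


# 2U twin: 2 PSUs, 2 nodes -> one failure leaves 1000 W per node
print(surviving_watts_per_node(2, 2))  # 1000.0
# 4U quad with 3 PSUs, 4 nodes -> one failure also leaves 1000 W per node
print(surviving_watts_per_node(3, 4))  # 1000.0
# Only a 4th PSU (N+1) actually improves degraded power per node
print(surviving_watts_per_node(4, 4))  # 1500.0
```

Interestingly, 3 PSUs over 4 nodes degrades to exactly the same per-node power as 2 over 2; the proportional win only appears if the 4U chassis carries a fourth supply.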

  • @declanmcardle
    @declanmcardle a year ago +2

    @8:20 - "it's an older cord, but it checks out..."

  • @llortaton2834
    @llortaton2834 a year ago +1

    AHAH, joke's on you Wendell, my 4U ATX-compliant consumer-grade server will NEVER DIE :D

  • @Phynix72
    @Phynix72 a year ago +1

    Reading your thumbnail, Linus is crying over his recent build. From a far continent I can hear "Why, Wendell? Why?"🤣

  • @MarkRose1337
    @MarkRose1337 a year ago +8

    Well, a server is a box, the plural of which is boxen. And two oxen are called a yoke. So that server could be a yoke of boxen. But I suppose for more than two it would be a herd. A herd of boxen.

    • @AndirHon
      @AndirHon a year ago

      box·​en | \ ˈbäksən \
      Definition of boxen
      archaic
      : of, like, or relating to boxwood or the box

    • @MarkRose1337
      @MarkRose1337 a year ago +1

      @@AndirHon I prefer the Jargon file definition:
      boxen: pl.n.
      [very common; by analogy with VAXen] Fanciful plural of box often encountered in the phrase ‘Unix boxen’, used to describe commodity Unix hardware. The connotation is that any two Unix boxen are interchangeable.

  • @tvmcrusher
    @tvmcrusher a year ago

    7:41 From here on out you can hear the maddening sound of an SPC being nearby.

  • @Paktosan
    @Paktosan a year ago +1

    So this is basically the comeback of the blade server, just on a smaller scale?
    We still have a six-blade system from Intel in the basement for testing purposes; some features are really cool. Failed node? No worries, the chassis will automatically relocate the virtual drive to a spare blade and boot it back up, with almost no downtime.

    • @JaeTLDR1
      @JaeTLDR1 8 months ago +1

      Blades share way more. This is just power and cooling being shared

  • @Dan-Simms
    @Dan-Simms a year ago

    Clicking the link and commenting here for your engagement. Cheers bud, keep up the great work!

  • @somehow_not_helpfulATcrap
    @somehow_not_helpfulATcrap a year ago +3

    What do you hear when you put your ear up next to a 1U server fan?
    Nothing from then on.

  • @BigHeadClan
    @BigHeadClan a year ago

    One of my past clients consolidated down from about 40 racks to 20 by snagging a few c6000 blade chassis and virtualizing a lot of their older hardware. 16 bays for servers per chassis in 10U of rack space is some pretty solid density. This type of 2-node setup probably makes more sense from an engineering perspective, but I always appreciated how scalable the blade chassis design was.
    If you have a free bay, or are upgrading one of the blades, you just plop the new one in and away you go. No need to re-rack or fiddle around with rails, re-run cables, etc. That said, it does suffer from the size restrictions of a blade chassis, which is even smaller than a 1U server, so fan pressure and the other issues Wendell raised are still a problem.

    • @jfbeam
      @jfbeam 9 months ago

      His systems are for massing GPU's. This little 2U thing is one of the few ways to do this without having to sell body parts. For you and me, who care about general purpose computing, blades have been the way to go for decades. (but it does often mean settling for vendor lock-in. and once they know you're on the hook, the deep discounts go away.)

  • @leviathanpriim3951
    @leviathanpriim3951 a year ago

    Wendell and Steve, sit down nerds the chosen ones are on screen

  • @R055LE.1
    @R055LE.1 a year ago +3

    Haven't blades been following this principle for like.. ever?

  • @probusen
    @probusen a year ago +3

    Redundancy is everything: 7x HPE DL360s with dual 800W PSUs have been a lifesaver many times. EPYC 24-core, 512GB of RAM, and 6x 1.92GB of storage in vSAN. No 1U servers will live a long time. :)

    • @jfbeam
      @jfbeam 9 months ago

      No *modern* 1U server will live a long time. (I have plenty from the long long ago that still work perfectly. But they don't draw more power than my entire neighborhood.)

  • @GooberBrainTrollingCorp
    @GooberBrainTrollingCorp a year ago

    7:40 THIS LOOKS AND SOUNDS LIKE AN INTRO TO A HORROR MOVIE

  • @majstealth
    @majstealth 11 months ago

    Maintaining these will be a cramped and warm hot-aisle job.

  • @bret44
    @bret44 a year ago +2

    Is there a spot for a fourth GPU? Frontier says it uses 4 GPUs per CPU; is this the same chassis? Also, what is meant by "Frontier has coherent interconnects between CPUs and GPUs" (Wikipedia)? Are these interconnects physical?

  • @andreas7944
    @andreas7944 a year ago +1

    If Wendell says it - I believe it. He might be wrong, but do I really care? It comes down to opinion, and his arguments are reasonable. That is all I care about. Please, Wendell, try having as many children as you can. We need more people like you.

  • @silverphinex
    @silverphinex a year ago +3

    I can't be the only one who finds the tone of server fans very peaceful after they come down from full tilt and settle at that lower volume. I have fully fallen asleep sitting next to a full rack of servers with their fans at that nice low drone.

    • @raven4k998
      @raven4k998 a year ago +1

      well, that's why you don't sleep next to that thing cause all it takes is for a heavy workload on that thing to wake you up in the middle of the night🤣🤣

    • @KomradeMikhail
      @KomradeMikhail a year ago

      I fell asleep on a helicopter flight....
      You can get used to anything over time.

  • @zector0
    @zector0 a year ago +1

    Imagine how his mind will explode the first time he sees a BladeCenter.

  • @solidreactor
    @solidreactor a year ago +4

    Is there a benefit to going even further with a "4U 4-node" configuration? Or are there diminishing returns after a 2U 2-node config?

    • @WilReid
      @WilReid a year ago +4

      The returns are virtually fully realized at 2U because it gets you 89mm of height for decent-sized fans. 3U would get you 120mm, but servers rely so much more on pressure that going up from 80mm to 120mm fans would see very little benefit. Noise reduction would be most of it, and the industry has already come to terms with noise from racks.
      3U or taller would also get you full PCI card height perpendicular to the mainboard, but angled adapters and risers have gotten around that for a decade now.

  • @Dexerinos
    @Dexerinos a year ago +1

    I saw that!!! You didn't screw in the rail screws :P

  • @losttownstreet3409
    @losttownstreet3409 a year ago

    Floor space was the limiting factor a long time ago. Now you can put a board together with off-the-shelf components, send the design to China, run it through a pick-and-place factory, and get your custom board if you are really tight on space; now power and cooling are the most limiting factors. Think a few years back, when you had to offer each and every customer a full server because virtualization wasn't a big factor. Now you run 100-400 virtual servers in a 2-4U unit. Before this you put as many FPGAs (those $10,000-$200,000 chips) in one case as you physically could. Now you have access to the cloud instead: F1 instances (with $8,000-$50,000 chips) and virtual cloud GPUs.

  • @goblinphreak2132
    @goblinphreak2132 a year ago

    I just realized the music you use gives me Contraption Zack vibes, if you remember that game from the DOS days.

  • @aliancemd
    @aliancemd a year ago

    7:16 That face after what he just said reminds me of Family Guy :)

  • @chrisbaker8533
    @chrisbaker8533 a year ago +2

    I like the compute density, but that backwards mounting is a deal killer for me.
    Given how much of a rat's nest the rear of a server rack often is, I really don't think I want to deal with that every time I have a failure or need to do something with it.

  • @boomerau
    @boomerau a year ago

    I've also seen the side-by-side HP left & right GPU 4RU servers. Basically this is a change in blade chassis form factor and capital investment.

  • @willcurry6964
    @willcurry6964 a year ago

    You always have great informative videos. Some are a little too complex for me, a non-IT guy. I now know I need a chassis (not rack mount) server and the server should have E1.S drives... maybe start with 6-7 TB drives... don't know where to buy.

  • @ETtheOG
    @ETtheOG a year ago +2

    A "Banquet of Servers" maybe :o?

  • @JW-uC
    @JW-uC a year ago +1

    Isn't it just a cut-down 2U-style "blade server" box? Obviously the blades in this 2U are horizontal and the original blades were vertical (with 8+ blades), and if I recall they didn't have space for a graphics card... but still.
    That said, I guess if you put the thing on its side, made the "box" square, and then had space for multiple "blades", you'd still not get any extra density because you'd still need multiple sets of redundant power supplies. As backplanes are much less of a thing now, with such high-speed serial network cards, you'd also not gain much from some kind of backplane system either.

  • @nicholaswoods9066
    @nicholaswoods9066 a month ago

    Thank you for the informative video,
    Cheers mate

  • @TheBitKrieger
    @TheBitKrieger a year ago +2

    So we came full circle and blade centers are cool again?

  • @Deveyus
    @Deveyus a year ago +1

    Plural of servers? A Ruckus.

  • @SlurP667
    @SlurP667 a year ago

    *opens server room door* I can hear the children screaming!

  • @KangoV
    @KangoV a year ago

    They are the same cables I have throughout my house :) Cool video :)

  • @prashanthb6521
    @prashanthb6521 a year ago +2

    4U with silent 120mm fans would be nice.

    • @Blacklands
      @Blacklands a year ago +1

      There's a bunch of cases on the market for this now! Some even support liquid cooling. Sliger makes some (expensive though).

  • @elikirkwood4580
    @elikirkwood4580 a year ago

    This one server, in 2U of rack space, has more compute power than my entire house, with several servers and gaming desktops in it.

  • @NathansWorkshop
    @NathansWorkshop 6 months ago

    5:50 RAWWWWWWWWWWWRRRRRRRRRR

  • @airman_85uk
    @airman_85uk a year ago +1

    Would be nice to know what kind of use cases we could use these servers for in 5-6 years when they get decommissioned and end up in the hands of homelabs….

    • @muadeeb
      @muadeeb a year ago

      I have an old 4 node system that I use as a Virtualization cluster

  • @movax20h
    @movax20h 10 months ago

    The thing is, if you colocate and use a lot of power, it does not really matter whether you use 1U or 2U; it is going to cost you almost the same, because the primary cost will be power.
    If you have a colo or DC that can deliver a lot of power to the rack, then it is not about optimizing cost, but rather a quest for how many you can put in a single rack, or a few close racks, so they are all connected over a very fast network.
    I rent a rack in Germany, and I am limited by power and network. I cannot put in more servers, because I do not have enough power in the rack, or ports in the switches. I even have a few empty units, because I am basically at the limit. I cannot switch everything from 1U to 2U, but if I can cram more into 1U by upgrading to higher density, or replace 2x 1U with a 2U that is actually more efficient, I will definitely do it. We use a lot of Kubernetes for compute, Ceph for storage, and a few hosts for virtualization (Proxmox).
    A 2U dual node is definitely more interesting than blade systems. Blades were always too expensive, requiring too much licensing and special setups. A hybrid like this, without an expensive chassis, is perfect.
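The power-vs-ports trade-off this comment describes can be sketched as a back-of-envelope calculation. All figures here (a 10 kW rack feed, a 48-port top-of-rack switch, the per-server wattages) are illustrative assumptions, not numbers from the comment or the video:

```python
# Rack-planning sketch: servers per rack are limited by whichever
# of power budget or switch ports runs out first.
RACK_POWER_W = 10_000   # power delivered to the rack (assumed)
SWITCH_PORTS = 48       # ports in the top-of-rack switch (assumed)


def max_servers(watts_per_server: int, ports_per_server: int = 2) -> int:
    """Servers that fit in the rack under both the power and port limits."""
    by_power = RACK_POWER_W // watts_per_server
    by_ports = SWITCH_PORTS // ports_per_server
    return int(min(by_power, by_ports))


print(max_servers(400))  # modest 1U nodes: port-limited -> 24
print(max_servers(900))  # dense GPU nodes: power-limited -> 11
```

With cheap 1U nodes the switch fills up first; with dense nodes the power budget does, which matches the comment's experience of having empty units left in a full rack.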

  • @Technopath47
    @Technopath47 a year ago

    All I can think is that the Frontier supercomputer shares a name with the worst ISP I've ever had the misfortune of dealing with.

  • @jfbeam
    @jfbeam 9 months ago

    2U has always been more efficient... a 2U fan can simply move more air, period. My former employer resisted this almost to their last breath; with 2x 150W CPUs in the box, their hand was forced. Originally, the only reason for 2U boxes was that it was the only way to get 2 power supplies, but there are plenty of tiny PSUs these days. (The system shown here _could_ be done in 1U, as there are 1kW 1U PSUs, but air cooling it would be difficult.)
    (To do 1U for our systems would require a load of 15k RPM fans, $30/ea not $3, and they'd last a year, not 3-5. And they'd need solid copper heatsinks, which were 100x more expensive than aluminum.)

  • @todayonthebench
    @todayonthebench a year ago +1

    In short: the main advantages of blade systems are still relevant, namely
    shared redundant power and cooling.
    Though blade systems also tend to add shared management as well as networking.

  • @dangerwr
    @dangerwr a year ago

    (Australian accent) And here we see a wild Wendell in his natural habitat.

  • @uncivil_engineer8013
    @uncivil_engineer8013 a year ago

    A Butler's Pantry of servers

  • @AlwaysStaringSkyward
    @AlwaysStaringSkyward a year ago

    @Level1Techs serious question: why are we using PSUs in servers? We used to have rack- or cage-level DC power fed to the servers on DC busses. It was safe, centralised, efficient, and could be triple redundant. It left 100% of the space in every server for doing work, and every server could be yanked out for maintenance without affecting the others.

  • @mhavock
    @mhavock a year ago +1

    We've been using 2U for a while: 1U is for hardware, the other is for making the grilled cheese sandwiches, and the top is for hot drinks or a hot plate. The boss thinks we are always busy; yeah, we are busy running Prime and disk tests so the food cooks faster. LOL 🤣

  • @AriBenDavid
    @AriBenDavid 6 months ago

    Yes, "murder" is the plural for crows. Servers? A "noise"?

  • @wskinnyodden
    @wskinnyodden a year ago

    Plural of Servers: A Cadre of Servers!

  • @deilusi
    @deilusi a year ago

    IMHO, 1U servers are a legacy from an era when the CPU and all the other pieces used 150W total, with 24 PCIe lanes tops. Right now 1U is just left for network gear and nodes that don't have to go full bore, and the biggest ones will move up. IMHO 3U will be the next popular size, as a compromise between the two previous systems: packed full of devices, either disks or GPUs. Something like mining racks, but standardized as plug and play.
    Whatever happens, I will raise a toast to the death of those 1U-sized screaming monsters; let them burn in hell.

  • @fracturedlife1393
    @fracturedlife1393 a year ago +1

    An Epyc of Servers

  • @KingTheRat
    @KingTheRat a year ago +1

    HP C7000 has entered chat

  • @chrsm
    @chrsm a year ago

    Sounds like my colleague's laptop with a "couple" of chrome tabs open

  • @asdkant
    @asdkant a year ago +1

    A whole restaurant of servers?

  • @jp-ny2pd
    @jp-ny2pd a year ago +1

    Personally I'm a fan of the Supermicro MicroCloud servers for our colo. We deploy the 8-node configuration because we like being able to swap the drives without downing the node or running into spacing issues with PDUs in the back of the rack. The 12- and 24-node solutions are nice but a bit more of a pain to do any sort of maintenance on, and less tolerant of rack configurations.

  • @Oil_of_Hope
    @Oil_of_Hope a year ago +1

    At start-up it sounds like an air-raid siren from WW2 😃

    • @acubley
      @acubley a year ago

      The V-8 powered ones?

    • @Oil_of_Hope
      @Oil_of_Hope a year ago

      @@acubley ruclips.net/video/WgaCNEQzL1Q/видео.html

    • @acubley
      @acubley a year ago

      @@Oil_of_Hope Ah, ty, thought you might mean ruclips.net/video/l04qWEEPFEk/видео.html

  • @GameCyborgCh
    @GameCyborgCh a year ago +1

    a full restaurant of servers

  • @danmenes3143
    @danmenes3143 a year ago

    Well, with those fans, a "cacophony" of servers? Maybe a "din" of servers?

  • @chadmckean9026
    @chadmckean9026 a year ago

    You're making it double plural; we don't say "school of fishes".

  • @Elemental-IT
    @Elemental-IT a year ago

    I have that same rack monitor, but some idiot cut the cord to the monitor as well as the keyboard/mouse combo. The VGA was a PITA, but standard... and I had both parts. The keyboard is not standard, and I am missing the connectors. I really wish I had a way to figure out the pinout, because 8 wires seems like it should be 2 PS/2 connectors.

  • @drgti16v
    @drgti16v a year ago

    Peter Griffin of IT!!!!!!

  • @chromerims
    @chromerims a year ago

    9:35 --- ultra-fine grade server-caterwauling p3wnage. Thank you.

  • @red5standingby419
    @red5standingby419 a year ago

    OK, but there are different use cases and needs for servers. We aren't all just deploying multi-GPU compute units in the data center. I'm sure 1U will continue to be a thing just fine for a very long time to come.

  • @JamieStuff
    @JamieStuff a year ago +1

    If rack mount, is it "a scream of servers"???

  • @Timi7007
    @Timi7007 a year ago +1

    Blade servers all over again^^

  • @eh5806
    @eh5806 a year ago

    "A furnace of servers"

  • @IAMANTWI
    @IAMANTWI a year ago

    A 'plural' of servers would be a good group noun.

  • @applicablerobot
    @applicablerobot a year ago +1

    Someone forgot to tell Linus

  • @beauregardslim1914
    @beauregardslim1914 a year ago

    This is definitely a "why didn't they think of this before" thing. Fans are why 3U is my favourite form factor for DIY rack-case builds. Unfortunately, 3U is kind of a rarity.

    • @cynicaloutlook
      @cynicaloutlook a year ago +1

      They have thought of this before, and at even more density. Dell's current lineup includes the PowerEdge FX, which has 4 slots (half-width 1U blades), but the concept goes back a few years to the PowerEdge M-series.