FASTEST Server Networking 64-Port 400GbE Switch Time!

  • Published: 16 Sep 2024

Comments • 156

  • @josephesposito9173
    @josephesposito9173 1 year ago +19

    "This was a bad idea and it just keeps happening, I don't know what to tell you." Love it and can certainly relate. I'm glad you can get in trouble so I don't have to.

  • @nzalog
    @nzalog 1 year ago +70

    Hey, thinking of upgrading my home network. Will this switch be enough for 1.2Gb Comcast? I don't want to bottleneck it; I'm paying big money each month.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +57

      You may have to wait for the new 800GbE generation later this year. :-)

    • @letterspace1letterspace266
      @letterspace1letterspace266 1 year ago +3

      @@ServeTheHomeVideo I wonder how much bandwidth an AI cluster needs. Is this meant for a serious carrier backbone?

    • @unlucky1307
      @unlucky1307 1 year ago +1

      @@letterspace1letterspace266 AI clusters can run on insane connections exceeding the 400G connections used here, but they can also just run on a single machine instead of as a cluster and still perform shockingly well if optimized.

    • @JeffGeerling
      @JeffGeerling 1 year ago +6

      Haha nice

  • @Gabi-ct3sz
    @Gabi-ct3sz 1 year ago +31

    Mikrotik tomorrow morning be like: this is our new CRS440-6QDD-1G-RM 400Gbps switch at just $2,000. Joking aside, we love our Mikrotik friends and hopefully they love us back

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +9

      If only! But then we would know it would be a 100% hot mess.

    • @jfbeam
      @jfbeam 1 year ago +2

      When that SoC ceases to cost north of $10k, maybe they will.

  • @edc1569
    @edc1569 1 year ago +10

    Looks like a perfect switch for the pool house

  • @rush2489
    @rush2489 1 year ago +13

    At work we are putting together two VMware clusters using a pair of Aruba 9300 switches (32 ports of 400G) as the backbone. All-NVMe vSAN storage is going to love the bandwidth.

  • @skullpoly1967
    @skullpoly1967 1 year ago +69

    Linus needs to see this

    • @blueguitar4419
      @blueguitar4419 1 year ago +6

      This is too good for Linus

    • @n_raz
      @n_raz 1 year ago +1

      He needs to quadruple this.

    • @brunobmartim
      @brunobmartim 1 year ago

      Totally agreed with you

    • @id104335409
      @id104335409 1 year ago +1

      Here we go again...😅

    • @lauraprates8764
      @lauraprates8764 1 year ago

      Imagine if he drops the chip while talking about its capabilities

  • @TheChadXperience909
    @TheChadXperience909 1 year ago +4

    What a BEAST! I'm glad that you guys went and showed this off. I got trolled by a guy suggesting that I use InfiniBand (which I'm already familiar with) at home on one of my recent comments. I also understand that it is a total BEAR to set up and get running at the correct speeds, etc. Totally NOT the sort of solution I'd be considering for home use. I've seen other YouTubers try to install old used enterprise InfiniBand gear in their home, and it's always funny to watch them suffer. Too bad you didn't show all the fun bits of yourselves wrestling the bear. Although, it's not very exciting for non-nerds to watch. I'm surprised you could even power this thing. You'd need to build an internal Chernobyl just to spin up the fans, alone! (Exaggeration intentional for comedic effect) I'm afraid the electric company would send an investigator if I installed this in my home.

  • @I4getTings
    @I4getTings 1 year ago +14

    Hi Patrick! Very cool. I mostly think of 400gig switches as core/ super-spine switches. But like you said, I could see it being great for breaking out into a LOT of 100gig links without any over-subscription between them. It is odd to think of 400gig as kind of slow, but if you had a bunch of 32 port 100gig switches with a 2 port lag up to the 400gig switch, that is 3200 down and 800 up. So about 4 to 1 oversubscribed uplinks. If instead you could break out a lot of 400gig ports into 100gig, and go straight to the server, there is no oversubscription between all of the connected servers.
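The oversubscription arithmetic in the comment above can be sketched in a few lines (a hypothetical helper written for illustration, not anything from the video):

```python
def oversubscription(access_ports: int, access_gbps: int,
                     uplink_ports: int, uplink_gbps: int) -> float:
    """Ratio of total access (server-facing) bandwidth to total uplink bandwidth."""
    down = access_ports * access_gbps   # e.g. 32 x 100G = 3200G down
    up = uplink_ports * uplink_gbps     # e.g. 2-port LAG of 400G = 800G up
    return down / up

# The scenario from the comment: 32-port 100G leaf with a 2-port 400G LAG uplink
ratio = oversubscription(32, 100, 2, 400)
print(f"{ratio:.0f}:1")  # -> 4:1 oversubscribed
```

Breaking the 400G ports out straight to 100G server links instead makes the ratio 1:1 between connected servers, which is the comment's point.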

    • @autohmae
      @autohmae 1 year ago +2

      When you mentioned core, I thought: I think most large companies use routers as the core. So I looked up the product and it said right in the title: "64-Port Ethernet L3 Data Center". So yeah. It's also always fun to see that the spec sheet mentions Ansible among the supported automation tools. What is interesting is that it supports RDMA over Converged Ethernet as well, which is something I wouldn't have seen many years ago: L3 and RoCE in the same box (a switch used to do just L2 and RoCE, or it was an L3 switch). My guess is it's a lot more common these days.

  • @LSUEngineer1978
    @LSUEngineer1978 1 year ago +2

    Just more awesome, cool content from Patrick K & STH. Yes it's nerdy. Of course it is! We learned about Mikrotik's 4-port 10Gb switch here from STH years ago, and other OEM 25Gb & 100Gb switches too. Where else can you see an FS 64-port 400Gb switch under the hood? You can count on Patrick. He will tell you how many screws it takes to get inside the device. And PK will remove the heatsink, the CPU, the fan modules & PSU modules every time. And you get discussion, use ideas, performance results, connection parts, etc. It doesn't get better than that. STH helps get my nerd habit filled each week along with the other online tech nerd content. Keep up the good work STH!

  • @AaronPace93
    @AaronPace93 1 year ago +3

    At my work, we are refreshing a DC, and where 400gig fits for us is more in the spine to super spine/core layer. We just don't have hosts doing speeds that high, but most are moving from multiple 10gig connections to 25s. I'll be jealous when there is a need for direct 400gig to the host!

  • @BarneyKB
    @BarneyKB 1 year ago +10

    I love videos like this. Absolutely crazy. One thing I've always wondered: We hear about the huge undersea cables with huge bandwidths. How does the data get into these cables? Is it stuff like this? Would be cool to see.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +14

      Noted. Will see what we can do on this one

    • @BarneyKB
      @BarneyKB 1 year ago

      @@ServeTheHomeVideo Awesome :D

    • @2xKTfc
      @2xKTfc 1 year ago

      @@ServeTheHomeVideo Serve the *home* plans a video on undersea fiber trunk lines. To, you know, serve the Gameboy Color in the home 😅

  • @legominimovieproductions
    @legominimovieproductions 1 year ago +3

    Finally, a reasonable connection for my TrueNAS at home

  • @jasper221176
    @jasper221176 1 year ago +3

    I love how you break a switch down into its parts. Now I understand how it works. And yes, 400Gb is crazy. But 20 years ago, 1Gb in a server was crazy too. Thanks for a peek into the future 😎👍

  • @DrivingWithJake
    @DrivingWithJake 1 year ago +2

    Gotta love these; we just picked up one of their 32x100G switches to play around with.

  • @Buciasda33
    @Buciasda33 1 year ago +5

    I hope I will live long enough to get a 400Gbps Internet connection as a home user.

  • @__--JY-Moe--__
    @__--JY-Moe--__ 1 year ago +4

    Super presentation Patrick! The server space is really moving fast! I'm glad you didn't show us a server bubbling in a cooling vat! But it sounds like this one gets warm.
    I hope someone has some winning cooling strategies out there! Good luck Pat..

  • @jk-mm5to
    @jk-mm5to 1 year ago +20

    ServeTheHome are you at the Gates' home?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +20

      Well "home" is the /home/ directory in Linux.

    • @Tgraves2976
      @Tgraves2976 1 year ago +8

      @@ServeTheHomeVideo lol, home is where my user directory lives

    • @QuantumConundrum
      @QuantumConundrum 1 year ago +1

      You know, I always thought of ServeTheHome as ServeTheHome(lab)... Could be a good spinoff channel? I never thought about ServeThe/home/ until now.

  • @popquizzz
    @popquizzz 1 year ago +2

    Being a network guy, I am struggling with the OSFP acronym used here, and I am sure when I start ordering these types of connectors/cables for these applications I will be corrected many times in conversations that overlap the two technologies. My one saving grace: 400Gb switching I will certainly see a lot of; 400Gb routing I'm not so sure I will see much of before I retire in another 10 years.

    • @feedmytv
      @feedmytv 8 months ago

      We're deploying 400Gbit routing on a Nokia platform. It's a step up from giant 100Gbit LACP bundles to 36-port 400Gbit line cards.

  • @chrish8941
    @chrish8941 1 year ago +6

    Love the sharpie marks on the socket screws. Did you find a torque spec for the switch-chip socket, or did you YOLOOOOO

  • @computersales
    @computersales 1 year ago +17

    Something I think would be interesting to see is the cost difference of being bleeding edge back in the 100Gb days vs now with 400Gb. I know $60,000 is a lot of money but at the same time it seems cheap for what you were getting and what you could do with it if you have the ability to utilize it.

  • @sarhtaq
    @sarhtaq 1 year ago +1

    Sure love to see these kinds of videos.
    As someone who works at a school with 1G access and 10G between spine/leaf, speeds like this are unobtainium for us right now ;)

  • @aleasd7905
    @aleasd7905 1 year ago +6

    Time to upgrade my home network I guess

    • @computersales
      @computersales 1 year ago +2

      1Tb switches come out next year if you wait. 🤪

    • @efimovv
      @efimovv 1 year ago +1

      A little bit noisy equipment...

  • @drk_blood
    @drk_blood 1 year ago +2

    Linus be like:
    " Ok, I want 4 ! " 🤦🏻‍♂️😂

  • @FelipeQueirolo
    @FelipeQueirolo 1 year ago +5

    IT support back in the cable lender's org: "Hmmm, I wonder why this server is unexpectedly under maintenance?"

  • @LtdJorge
    @LtdJorge 1 year ago +9

    Hey Patrick, do you not have a subscription system for the website like Phoronix does, so we can read without ads while contributing? I feel bad having uBlock on, especially since I'm transitioning to 10G (switching) and 40G (direct) at home, and it's mostly thanks to you guys and your awesome reviews and posts.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +16

      We do not, but as you can see from the screenshots, we have an external firm sell all of the desktop inventory and most of the mobile. If you are on desktop you will only see ads from our servers about server, storage, and networking things. We do not have bidding, auto-roll video, moving ads, and so on, because I do not like them.

    • @LtdJorge
      @LtdJorge 1 year ago

      @@ServeTheHomeVideo great, thank you!

  • @threepe0
    @threepe0 1 year ago +2

    I wish I had found this channel so much earlier. I really find your gear overviews entertaining and informative. It's obvious you love what you do, and that sort of enthusiasm is contagious.
    Even though I will never be able to justify a $60k switch at home, you've provided a whole ton of ideas and concepts that have been useful both at home and at work.
    Selfishly seeking out your videos focused on more affordable gear though, haha

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +1

      :-) thanks. We cover the spectrum and publish at least once a day on the STH main site

  • @witzbereit
    @witzbereit 1 year ago +2

    Some die shots of the switch chip would be quite interesting too

  • @okoeroo
    @okoeroo 1 year ago +2

    Well, you could also opt for a PowerPC gen 9 or 10. Try that for size.

  • @TheJonathanc82
    @TheJonathanc82 1 year ago +1

    Oh to have the money, data closet, and power hookups to run this in my house… I can dream.

  • @ewenchan1239
    @ewenchan1239 1 year ago +4

    Wow. 9% difference in throughput between ETH and IB.
    On 100 Gbps IB, the difference that I measured was only about 3%.
    $6k for 400 Gbps IB is actually NOT that bad when you think about $/Gbps (for a point-to-point connection).
    I use 1 GbE as the "management" port these days.
    Server consolidation meant that I was able to SHRINK my network (rather than expand it).
    But on my systems that have the space and are able to support it, I have 100 Gbps going to my Ryzen 9 5950X HPC compute nodes and also 100 Gbps going to my Core i7 3930K, which runs my LTO-8 tape backup system.
    It takes the "stress" off the 1 GbE layer and moves it completely over, and the 100 Gbps can handle a lot more things happening at the same time.
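The $/Gbps idea above can be made concrete with a tiny helper. All the numbers here are the commenter's (the ~$6k price, the ~9% throughput gap); the function itself is just an illustrative sketch:

```python
def cost_per_gbps(price_usd: float, line_rate_gbps: float,
                  overhead_pct: float = 0.0) -> float:
    """Dollars per delivered Gbps after subtracting a measured overhead percentage."""
    effective_gbps = line_rate_gbps * (1 - overhead_pct / 100)
    return price_usd / effective_gbps

# $6000 for a 400Gbps point-to-point link, with the quoted ~9% ETH-vs-IB gap
print(round(cost_per_gbps(6000, 400, 9), 2))   # ~16.48 $/Gbps
# Same math at the quoted ~3% gap on 100Gbps, assuming a hypothetical $2k link
print(round(cost_per_gbps(2000, 100, 3), 2))   # ~20.62 $/Gbps
```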

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +2

      Well that is also a direct connect IB versus through the switch 400GbE.

    • @ewenchan1239
      @ewenchan1239 1 year ago +1

      @@ServeTheHomeVideo
      Yeah.
      I forget what my point-to-point ETH bandwidth losses were compared to a point-to-point IB bandwidth.
      I don't remember if I ran my testing on the 100 Gbps IB through my switch or if it was a point-to-point test.
      I would think that the bandwidth losses should be lower on the point-to-point side (because you aren't going through a switch), but I can't say that definitively.
      Very cool.
      100 Gbps IB is fun.
      People give you REALLY interesting looks when you tell them that's what you run in the basement of your home.

  • @Alan.livingston
    @Alan.livingston 1 year ago +1

    Great project. Why do we climb a mountain? Because we can!

  • @OVERKILL_PINBALL
    @OVERKILL_PINBALL 1 year ago +2

    I'm just here for the awesome enthusiasm!
    #PatrickTherapy

  • @jcarales1762
    @jcarales1762 1 year ago +1

    Pretty Big Switch!

  • @MenkarX
    @MenkarX 1 year ago +2

    This is not ServeTheHome, this is ServeTheDatacenter...

  • @ArsenioDev
    @ArsenioDev 26 days ago +1

    It's totally wild that they SOCKETED that switch chip vs soldering it down in BGA format.
    Wonder why

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  26 days ago

      Yea crazy, but I also bought a socketed TH4 for the studio backdrop.

  • @MarkBarrett
    @MarkBarrett 1 year ago +1

    My understanding is InfiniBand can do direct memory access and PCIe over fiber, but when used for internet traffic, TCP/IP is emulated.

  • @k34561
    @k34561 1 year ago +1

    So one misplaced can of Jolt can take down all the networking for 256 servers at once. Amusing ;-)

  • @kellymoses8566
    @kellymoses8566 1 year ago +1

    400Gbps Ethernet would be awesome for hyperconverged storage.

  • @jeremybarber2837
    @jeremybarber2837 1 year ago +1

    Do you have a post on the main site about network tuning? I’d love to read through it as we’re going to be deploying 100G clients soon. Thanks for the great content!

  • @OVERKILL_PINBALL
    @OVERKILL_PINBALL 1 year ago +2

    Nobody is allowed to complain about 10Gbe prices anymore! lol

  • @philliumo
    @philliumo 1 year ago +1

    Can you guys talk about the differences between Ethernet and Infiniband networking at some point for those of us who hear the different names thrown around but don't actually understand what the functional differences are?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +2

      Maybe a good summer topic. The easy way to think of it is that Ethernet is what just about everything runs on. InfiniBand started as a storage-focused high-performance solution and has morphed over the years into an HPC- and AI-specific low-latency/high-bandwidth interconnect.

  • @bits2646
    @bits2646 1 year ago +1

    I want that heatsink :)

  • @steveschiets8031
    @steveschiets8031 10 months ago

    Great video. I'm going in as a Spectra7 investor, assuming that the demo Cisco DAC cable must be made by them. You mention a heatsink attached to the DAC cable; does this mean it's an active copper cable, or ACC?

  • @ronaldagorsah7954
    @ronaldagorsah7954 1 year ago

    Thanks for the video, but it would be nice to see more use cases

  • @nicoladellino8124
    @nicoladellino8124 1 year ago

    Very very impressive, THX.

  • @firatcoskun5689
    @firatcoskun5689 1 year ago

    Hi Patrick, How were you able to connect the OSFP HCAs to the QSFP-DD ports on the switch for the Ethernet testing? I just re-watched the video, starting at the 12:30 mark, where you're referring to "funky optics", I'm assuming these are QSFP-DD w/ MPO on the Switch Side and OSFP w/ MPO on the HCA side? I also read the Main Site Article, where you also talk about having to navigate different signaling speeds in-between the optics. I then looked in the STH forums and read the 2 threads discussing OSFP, and posted on one of them. Any details you can share would be much appreciated.

  • @rustusandroid
    @rustusandroid 7 months ago

    So, the NVIDIA cables you can't use? Kinda confused me on that one. Why would they make a DAC cable that literally can't fit into the sockets?

  • @loshan1212
    @loshan1212 1 year ago +1

    Cool, now do the same with 800Gb :)

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +1

      We have seen a few 32x800G switches on the STH main site. We really need 800GbE NICs and PCIe Gen6 for that

  • @grantfahey4439
    @grantfahey4439 1 year ago +1

    Don't mind me, just sitting here with my plebeian 10Gb Mikrotik switch and my Cat 6a.

  • @daniellundin8543
    @daniellundin8543 1 year ago

    Very interesting! Thx!

  • @kaih.4687
    @kaih.4687 1 year ago +1

    looks into wallet: "yeah nah, that isn't in the budget"

  • @silenthill4
    @silenthill4 1 year ago +1

    800G has been running for a few years now

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +1

      800G switches are going to be more common later this year. I think we will have the announcement of one next week

  • @rayl6599
    @rayl6599 1 year ago

    I am confused. I thought 400G optics use 100G SerDes times 4 lambdas -- so effectively, with a server, you only get one 100G lane on one lambda?

  • @shanent5793
    @shanent5793 1 year ago

    Why does a direct-attach cable need cooling? Doesn't it connect directly to the switch chip SERDES?

  • @hatclub
    @hatclub 4 months ago

    The idea that someone might give FS $55k for their whitelabel of some switch where future software support is basically "????" afaict, and they can't even be assed to align the label around the management port properly, is wild to me

  • @ChickenPermissionOG
    @ChickenPermissionOG 2 months ago +1

    No one would ever saturate this in a home network.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  2 months ago

      I actually had this running in my home network for a video that we never published. Then again, not everyone has ~1600 fiber strands running through their walls.

  • @lavavex
    @lavavex 1 year ago

    I’ll take “Things that I could never afford or handle” for 600 Pat.

  • @SaltyJoeTheFox
    @SaltyJoeTheFox 1 year ago +1

    8x 130W blowback fans consume over 1kW (peak) just for cooling, lul. Like every blade-server chassis

  • @legominimovieproductions
    @legominimovieproductions 1 year ago

    Entire DE-CIX probably running on two racks of those XD

  • @amessman
    @amessman 1 year ago

    Switches with socketed processors... I am drooling

  • @ellenorbjornsdottir1166
    @ellenorbjornsdottir1166 1 year ago +1

    Could probably firewall an entire small country with this. Idk.

  • @PR1V4TE
    @PR1V4TE 1 year ago +5

    I love the content you are bringing in. Just think if someone is doing a DDoS with two of these devices. 😂😂 100+ terabits of data coming in. Sheesh.

    • @bobbydazzler6990
      @bobbydazzler6990 1 year ago +4

      That's a pretty stupid comment. Where are you going to get 50Tb or 100Tb of Internet connectivity for a DDoS attack from *two* devices? Think before you comment. 🙁

    • @PR1V4TE
      @PR1V4TE 1 year ago +1

      @@bobbydazzler6990 Mate, I can surely say you never worked as an IT network administrator. 😂 Do you know how much bandwidth we can get? You can even combine bandwidth via load balancing. Learn something before you talk to a network administrator.

    • @bobbydazzler6990
      @bobbydazzler6990 1 year ago +4

      @@PR1V4TE 1. I've never had the title "Network Administrator". That title is for putzes who spend 80% of their time messing around adding users or printers in Active Directory.
      2. According to my resume and LinkedIn profile, I've had the title "Sr. Network Engineer" for a number of global companies.
      3. I've forgotten more about Networking than you will ever know.
      4. Don't you have some shitty HP Procurve switches to deploy instead of playing around on RUclips? 🤣
      5. Have a pleasant weekend.

    • @PR1V4TE
      @PR1V4TE 1 year ago

      @@bobbydazzler6990 Why are you saying what your day job is lol 😂

    • @twei__
      @twei__ 1 year ago +4

      ​@@bobbydazzler6990 "don't you have some HP procurve switches to deploy" man you didn't have to straight up insult him

  • @ajhieb
    @ajhieb 1 year ago +3

    Am I the only one that just assumes that most "phone a friend" calls on this level end up going to Wendell @Level1Techs? (Yes, I realize it's unlikely in this specific case)

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +5

      If we have a phone a friend it usually goes to someone at a vendor, hyperscaler, or systems integrator.

    • @ajhieb
      @ajhieb 1 year ago +3

      @@ServeTheHomeVideo That's like telling a kid his Christmas presents came from Amazon instead of Santa.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +2

      Put Wendell and I together and it is pretty tiring: ruclips.net/video/oUugk0INTRU/видео.html (that was done in his hotel room late after a big steak dinner.)

  • @bdhaliwal24
    @bdhaliwal24 1 year ago +1

    Cables that need heatsinks!?

  • @ellenorbjornsdottir1166
    @ellenorbjornsdottir1166 1 year ago +2

    Now put a 400-gig card in the oldest computer you possibly can, using adapters if necessary - ultimate bottleneck! /s

  • @MelroyvandenBerg
    @MelroyvandenBerg 1 year ago +1

    wow great!

  • @aurahack8819
    @aurahack8819 1 year ago +2

    When is LMG getting this installed?

  • @shephusted2714
    @shephusted2714 1 year ago +1

    Try it without the switch - 400G will drop in price; it will be good for clusters - they will have 800G before you know it #cluster overhead

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +1

      800GbE will happen but it is hard to go to the node at that speed until PCIe Gen6

  • @preferencezilla
    @preferencezilla 1 year ago +1

    Let's go!

  • @MarkBarrett
    @MarkBarrett 1 year ago +1

    400Gb/s is probably going to be the realistic limit for a few decades.

  • @Aggrop0p
    @Aggrop0p 3 months ago

    Can't wait to do this for 500€ in 2034

  • @fanshaw
    @fanshaw 9 months ago

    FASTEST Server Networking 64-Port 400GbE Switch Time!
    ServeTheHome
    🤣🤣🤣🤣🤣

  • @sativagirl1885
    @sativagirl1885 11 months ago

    Garbage in at 400GbE, Garbage out at 400GbE. Buy now!

    • @whothefoxcares
      @whothefoxcares 11 months ago

      Weed better. Slowdown. Light humor goes faster, eh?

  • @bravestbullfighter
    @bravestbullfighter 1 year ago

    When 8-port home version?

  • @matt.604
    @matt.604 1 year ago

    How is this going to serve the home 😄

  • @DanielCardei
    @DanielCardei 1 year ago

    People would love to have 1Gb upload and download speeds, let alone 400Gb 😁

  • @pleappleappleap
    @pleappleappleap 6 months ago

    I pushed 1.12Tbps a few years ago.

  • @thebigfut
    @thebigfut 1 year ago

    Honestly, I cannot fathom 400Gb of network speed. I can easily conceive that it happens and is the result of everything else being bolstered. But to actually functionally use it... I'm noping out of that one.

  • @Bawlk
    @Bawlk 1 year ago

    I don't get this channel - I thought it was all about home network not massive network equipment like this 🤔

  • @LaxmanLaxman-t6k
    @LaxmanLaxman-t6k 1 year ago +1

    😊😊😊😊😊😊😅😊😊😅😊😊

  • @tritech
    @tritech 1 year ago +6

    Yea, I'm out. There is nothing about "home" in "servethehome" anymore.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +13

      Hm, /home/ in Linux?

    • @LtdJorge
      @LtdJorge 1 year ago +19

      Who cares? If you're nerd enough to have a homelab, you should be nerding out about equipment like this.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago +13

      @@LtdJorge Exactly.

  • @bekim137
    @bekim137 1 year ago +1

    My entire country has 100gbit internet

  • @austinklenk9571
    @austinklenk9571 1 year ago

    @LinusTechTips

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  1 year ago

      I am working on getting them another switch that we reviewed in 2021 for their setup.

  • @dudulook2532
    @dudulook2532 1 year ago +1

    Extremely powerful, sexy, and crazy device! I love it.