Trying out 40GBe, Does It Make Sense In A Homelab?

  • Published: 14 Jan 2025
  • Science

Comments • 58

  • @alexanderg9106
    @alexanderg9106 11 months ago +15

    Please keep in mind: these cards need active cooling, i.e. a fan that moves some air over them. A 6 cm fan at 5 V will do it. Without airflow the cards will throttle and can even stop working entirely.

    • @ElectronicsWizardry
      @ElectronicsWizardry  11 months ago +9

      Thanks for pointing that out. I was doing some testing and had extremely low speeds and dropouts. Then I noticed the syslog messages and the card had an extremely hot heatsink. A fan made the card work as normal again.

    • @esotericjahanism5251
      @esotericjahanism5251 10 months ago

      Most 10Gb NICs need active cooling, especially if you don't have them in a rackmount chassis. I stuck some X540s in a few old Lenovo ThinkStation Tiny 1L systems I have running in an HA cluster on Proxmox. I took the blower-fan coolers off a few old Nvidia Quadro cards I had, modified them to fit onto my NICs, and spliced the CPU fan connections to drive the blower fans; that keeps them running cool even under pretty intensive transfers. I did something similar for the NIC in my main desktop: I took a single-slot GPU cooler off an old Radeon Pro card and a copper heatsink and modified it to fit, then adapted the JST-XH fan connector to a standard KF2510 4-pin PWM fan connector and hooked it up to my fan hub with a temp probe to set a custom fan curve for it.

    • @TheRealMrGuvernment
      @TheRealMrGuvernment 7 months ago +1

      This! Many people don't realise these are cards designed for servers, where high-CFM airflow is pushed through the entire chassis.

  • @ewenchan1239
    @ewenchan1239 11 months ago +10

    Seven things:
    1) I'm currently running 100 Gbps InfiniBand in my homelab.
    For storage, because my backend servers are all running HDDs, I think the absolute maximum that I have ever been able to push was around 16-24 Gbps (block transfers rather than file transfers).
    In nominal practice, it's usually closer to around 4 Gbps max (~500 MB/s).
    But again, that's because I am using 32 HDDs on my backend with no SSD caching. (SSDs are the brake pads of the computer world -- the faster they go, the faster you'll wear them out.)
    2) For NON-storage related tasks, my HPC FEA application is able to use somewhere around 83 Gbps or thereabouts when doing RAM-to-RAM transfers between the compute nodes in my micro HPC cluster.
    That's roughly where RDMA lands in an actual RDMA-aware application.
    (Or really, it's MPI that's able to use the IB fabric.)
    A long time ago, I was able to create four 110 GB RAM disks, and then created a distributed striped Gluster volume and exported that whole thing back to the IB network as a NFSoRDMA share/export.
    But if my memory serves, that ended up being quite disappointing as well, as I don't think it even hit 40 Gbps due to the various layers of software that were running.
    3) "RoC over E" is actually just "RoCE" - RDMA over converged ethernet.
    4) If you want to try and push faster speeds, you might want to look into running SMB Direct rather than SMB multi-channel.
    5) I don't use SMB for network data transfers. I mean, I can, but peak performance usually happens between Linux systems (CentOS 7.7.1908), because one can be the server exporting an NFS share with NFSoRDMA enabled, while the Linux client mounts said share/export with the "rdma,port=20049" option to take advantage of NFSoRDMA (see the sketch after this list).
    6) Ever since I went through my mass migration project in Jan 2023, MOST of the time my Mellanox MSB-7890 externally managed 36-port 100 Gbps IB switch is now OFF. Since my VMs and containers run off a single server (the result of said mass consolidation project), VM-host communication is handled via virtio-fs for the clients that support it, and for the clients that don't support virtio-fs (e.g. SLES12 SP4, out of the box) I use "vanilla" NFS.
    Again, with HDDs it's going to be slow anyway, so even though the virtio NICs show up as 10 Gbps NICs, I don't need a switch, nor any cables, nor any physical NICs, since all of it is handled by the virtio virtual NICs attached to my VMs and containers in Proxmox.
    (NFS, BTW, also runs with 8 processes by default.)
    7) The NICs, cables, and switches are only cheaper in absolute $ terms, but on a $/Gbps basis -- they're actually quite expensive still.
    My Mellanox switch is capable of 7.2 Tbps full-duplex switching capacity, which means that, given the price I paid for said switch at the time (~$2,270 USD), I'm at about $0.315/Gbps.
    Pretty much ALL of the 10 Gbps switches that I've looked at would be a MINIMUM of 4x higher than that. Most of the time it's actually closer to 10 TIMES that price.
    i.e. if you want an 8-port 10 Gbps switch (80 Gbps switching capacity - half duplex/160 Gbps full duplex switching capacity) - that switch would have to be priced at or below $50.43 USD to be equal in cost in terms of $/Gbps as my IB switch, and 10 Gbps switches still aren't quite there yet.
    So, from that perspective, going 100 Gbps IB was actually a better deal for me.
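
    For reference, a minimal sketch of the NFSoRDMA mount described in point 5, assuming a Linux server exporting /data at 192.168.1.10 and the stock kernel NFS/RDMA stack; addresses and paths are made up, not taken from the comment:

      # server side: load the RDMA transport for knfsd and listen on the conventional port 20049
      sudo modprobe svcrdma
      echo "rdma 20049" | sudo tee /proc/fs/nfsd/portlist

      # client side: mount the export over RDMA instead of TCP
      sudo modprobe xprtrdma
      sudo mount -t nfs -o rdma,port=20049,vers=3 192.168.1.10:/data /mnt/data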

  • @prashanthb6521
    @prashanthb6521 20 days ago

    I frequent your channel every now and then and I like these experiments of yours. Thanks for these tests and your time.
    It would have been better if you had included latencies in these test results; latency matters much more than throughput, actually. How about a FIO storage test over the network?
    8:04 Also, for random I/O w.r.t. databases, I think there is no point going for higher-speed cards. Most likely the latencies will be only slightly better, not a drastic improvement.
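
    A sketch of the kind of fio run being asked for here, pointed at a network mount; the path, sizes and job counts are made up for illustration:

      # 4K random read/write against a share mounted at /mnt/nas (NFS or SMB),
      # reporting latency percentiles alongside throughput
      fio --name=netrand --directory=/mnt/nas --rw=randrw --rwmixread=70 \
          --bs=4k --size=4G --ioengine=libaio --direct=1 --iodepth=32 \
          --numjobs=4 --runtime=60 --time_based --group_reporting --lat_percentiles=1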

  • @nadtz
    @nadtz 11 months ago +5

    I got a Brocade ICX 6610 for a song and, after some hunting around, found out you can use the 40Gb stacking ports for regular data after some tinkering. Cards and cables will cost a bit, but honestly, considering I got the switch for $120 to add more 10Gb ports, another $120 or so for 40Gb cards and cables to test it with puts me at about the price of a Mikrotik CRS309-1G-8S+IN, all said and done. It uses more power and makes more noise (and the cards run significantly hotter), but it will be fun to play with.

  • @LUFS
    @LUFS 11 months ago

    Another great video. Thank you. I got a lot of inspiration and knowledge on this channel.

  • @jeffnew1213
    @jeffnew1213 11 months ago +3

    I think 40GbE is getting pretty rare these days, replaced by 25GbE. While a 40GbE switch port breaks out into 4 x 10GbE connections, a 100GbE switch port breaks out into 4 x 25GbE connections. Switches with 25GbE and 100GbE ports are available at quite decent pricing for higher-end home labs. Further, the Mellanox ConnectX-3 series of cards has been deprecated or made outright incompatible with some server operating systems and hypervisors in favor of ConnectX-4 cards. I am thinking of ESXi here, which is what I run at home. I have a Ubiquiti USW-Pro Aggregation switch with 4 x 25GbE SFP28 ports, each connected to a Mellanox ConnectX-4 card: two in two Synology NASes, and two more in two Dell PowerEdge servers.
    Good video. You're a good presenter!

    • @SharkBait_ZA
      @SharkBait_ZA 10 months ago +1

      This. Mellanox ConnectX-4 cards with 25Gbps DAC cables are not so expensive.

  • @ericneo2
    @ericneo2 11 months ago +2

    I'd be curious to see what the difference would be if you used iSCSI instead of SMB, or, if you could do FC, what RAM disk to RAM disk would look like.

    • @ElectronicsWizardry
      @ElectronicsWizardry  11 months ago +2

      I didn't try iSCSI, but I did try NFS with similar performance as SMB. I might do a video in the future looking at NFS, SMB, iSCSI and more.

  • @silversword411
    @silversword411 11 months ago +1

    Love to see: lists of Windows and Linux commands to check SMB configuration, then server and client troubleshooting commands. Break down your process and allow repeatable testing for others! :) Great video
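
    On the Linux/Samba side, a couple of starting points for that kind of checklist (on Windows the rough equivalents are the Get-SmbClientConfiguration and Get-SmbMultichannelConnection PowerShell cmdlets); a sketch, not an exhaustive list:

      # dump the effective Samba configuration, including defaulted values, and check multichannel
      testparm -sv | grep -i "multi channel"
      # show active SMB sessions, negotiated protocol versions and open files
      smbstatus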

  • @shephusted2714
    @shephusted2714 11 months ago

    Really good content. A workstation to a dual NAS with dual-port 40G is a great setup for anybody, but especially the small-biz sector: no switch needed and you can sync the NASes quickly. Maybe in a follow-up you can use NVMe RAID 0 arrays and bonded 40G for effectively 80G between workstation and NAS. The 56G ConnectX cards are around 50 bucks, so this can make for an affordable speed boost, especially if you are moving a lot of video/VMs/backups or other big data around. Good to see this content and how relatively easy it is to get the most performance with SMB multichannel and jumbo frames.

  • @CarAudioInc
    @CarAudioInc 11 months ago +1

    Great vid as always. I'm running 10Gb, though realistically it only very rarely gets used. Bit of future proofing I suppose, even though I wonder if that future will ever come lol.

  • @michaelgleason4791
    @michaelgleason4791 4 months ago +1

    Doing some math when deciding what gigabit speeds made sense a few months ago, I settled on 25gbit to connect my 12700k server and 5800x desktop. Just a single DAC between them. My 12700k server has mostly spinners with simple mergerfs for storage, so 10gbit would've been fine, but there was basically no price difference between used 10 and 25 gear. Then I bought a Brocade 6610 that I still need to play with, and a dell t7820 to be my new main server (36c/72t, 128GB RAM for now). So I'm considering putting a 40gb card in it since the Brocade has 2 40gbit ports. And, of course, I will definitely need to put my spinners into a zfs setup, with probably a flash cache (with u.2 drives) in front of it. It's a great time to be buying used enterprise gear!

    • @ElectronicsWizardry
      @ElectronicsWizardry  4 months ago

      Totally agree with getting faster networking if you can. If only the switches were as cheap as the direct connections.

  • @idle_user
    @idle_user 10 months ago +4

    My homelab has the issue of mixed NIC speeds: a Proxmox 3-node cluster at 4x 2.5Gb, my personal PC at 10GbE, and a Synology NAS with 4x 1Gb. I use SMB multichannel on the NAS to get bursts of speed when it can. It helps out quite a bit.
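
    For reference, on a plain Samba server multichannel is a single option in the [global] section of smb.conf (Synology's DSM may expose or override this differently, and newer Samba releases may already default to it; treat this as a generic sketch):

      # /etc/samba/smb.conf
      [global]
          server multi channel support = yes

      # pick up the change without restarting smbd
      sudo smbcontrol smbd reload-config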

  • @OsX86H3AvY
    @OsX86H3AvY 11 months ago +1

    Great vid, keep 'em coming! Also, I recently went to 10G/2.5G for my home network, so I looked into 40G at the time. I don't think I heard it mentioned, but be careful about going 40G -> 4x10G: to my understanding that only works on switches, and I think only for one brand (Mellanox maybe? I forget). I ended up going with an X710-DA4 instead for 40Gb as 4x10G without needing to worry about any issues. I think those breakout cables were specifically made for going from a core switch to your other switches or something like that, but if I'm wrong please correct me; mostly I just remember it seemed like a hassle waiting to happen, so I didn't go that way.

    • @ElectronicsWizardry
      @ElectronicsWizardry  11 months ago +1

      I don't have a ton of experience with those breakout cables, but I think you're right that they're pretty picky, and you can't just use them for every use case.

  • @mhos5730
    @mhos5730 11 months ago

    Excellent video. Good info!

  • @OsX86H3AvY
    @OsX86H3AvY 11 months ago +4

    Talking SSDs reminded me of a recent issue I had where my ZFS share would start out at like 400 MB/s (10G connections) but would quickly drop BELOW GIGABIT speeds, and that's with 6 spinning 7200 rpm drives and a 128GB SSD cache... I mention it because REMOVING the cache actually sped it up to a consistent 350+ MB/s; the SSD was what was bottlenecking my rusty ZFS pool because it was trash flash as an ARC cache. Anyway, not really related, but I thought that was one of those interesting things where I had to tweak/test/tweak/test ad infinitum as well.

    • @LtdJorge
      @LtdJorge 8 months ago

      Most people don't need L2ARC, and it consumes more RAM for no real benefit. If you only have one SSD for multiple HDDs, you should set it up as a special vdev instead, although having 2 mirrored is recommended, since losing data from the special vdev destroys your pool (irrecoverably).
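
      A sketch of both operations, assuming a pool named tank and made-up device names (mirror the special vdev, as noted above):

        # drop an L2ARC cache device that isn't helping
        sudo zpool remove tank nvme0n1

        # add a mirrored special vdev for metadata and small blocks instead
        sudo zpool add tank special mirror nvme0n1 nvme1n1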

    • @prashanthb6521
      @prashanthb6521 20 days ago

      This same thing happened with Dropbox. They removed cache and transfers became faster.

  • @skyhawk21
    @skyhawk21 7 months ago

    I'm finally at 2.5Gb, but an old mix of HDDs on a Windows box using Storage Spaces doesn't max it out. I was also going to get the server box a 10Gbps NIC to go into the switch's SFP+ port.

  • @evan7721
    @evan7721 6 months ago

    When you go to try RoCE v2, what sorts of benchmarks (synthetic or "real world") against what hardware are you going to attempt? I'm interested in 40 or 50GbE-based RoCE in my homelab and want to try building Ceph with support for it and seeing what's required for QuantumESPRESSO.
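
    For the synthetic side, the perftest suite is the usual first stop for RoCE; a sketch with Mellanox-style device names and a made-up peer address (on RoCE v2 you may also need to pick the right GID index with -x):

      # on the server node
      ib_write_bw -d mlx5_0 --report_gbits

      # on the client node: bandwidth, then write latency
      ib_write_bw -d mlx5_0 --report_gbits 192.168.40.2
      ib_write_lat -d mlx5_0 192.168.40.2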

  • @skyhawk21
    @skyhawk21 7 months ago

    Got a Sodola switch with 10Gbps SFP+ ports, but I don't know what cheap compatible transceiver to buy, and which would be best for connecting to a future 10Gbps NIC in the server.

  • @joshhardin666
    @joshhardin666 4 months ago +1

    10Gb is only 1.25 GB/s. Most of my SSDs can single-handedly do 3,000 MB/s of constant throughput, and the new 9950X machine I just built has a Gen5 drive that can do a very real 14 GB/s reads and 12 GB/s writes. Even 40Gb is only 5 GB/s. FWIW, I'm currently running 10GBASE-T in my homelab and in my house between NAS and workstations. I wish it were easier and less expensive to get faster internet, but honestly most of the time my problem is the lack of PCIe lanes on consumer platforms. My workstation has 2 Gen5 and 2 Gen4 M.2 SSDs, a 16-lane Gen4 GPU (a 4090 can actually use more bandwidth than 8 lanes provide and thus needs a dedicated 16 lanes), and onboard 10G which shares bandwidth with the motherboard chipset (and thus things like high-speed USB). The lack of PCIe lanes on modern desktops is absolute garbage, and I wish CPU manufacturers would give us AT LEAST 16 more lanes as standard.

  • @ryanmalone2681
    @ryanmalone2681 7 months ago +2

    I have 10G on my network and only saw it go above 1G once with actual usage (i.e. not testing), so I’m upgrading to 25G just because I feel like it.

    • @AM-gd7ed
      @AM-gd7ed 6 months ago

      I like your plan. I am also in the same boat. After all the research I have gone through, the faster you want things, the more money and better hardware you need.

  • @insu_na
    @insu_na 11 months ago +4

    My servers all have dual 40GbE NICs and they're set up in a circular daisy chain (with spanning tree). It's really great for when I want to live-migrate VMs through Proxmox, because it goes *fast* even if the VMs are huge. Not much other benefit, unfortunately, since the rest of my home is connected by 1GbE (despite my desktop PC also having a 40GbE NIC that is unused due to cat🐈🐈‍⬛based vandalism).

    • @CassegrainSweden
      @CassegrainSweden 11 months ago +1

      One of my cats bit off both fibers in a 10 Gbps LAG :) I wondered why the computer was off the network and, after some investigating, noticed bite marks on both fibers. The fibers have since been replaced with DAC cables, which apparently do not taste as good.

    • @insu_na
      @insu_na 11 months ago +1

      @@CassegrainSweden Hehe
      The cable runs are too far for DAC for me, so I gotta make do with Cat6e. I'm afraid that if I use fiber again, maybe this time when breaking the fiber they might look into it and blind themselves. Glad your cat and your DAC are doing fine!

  • @Aliamus_
    @Aliamus_ 11 months ago

    Set up 10GbE at home (dual XCAT 3's with a switch, an XGS1210-12, in between to connect everything else: 2 SFP+ 10Gb, 2 2.5GbE RJ45, and 8 1GbE RJ45). I usually get around 600 MB/s when copying files; if I make a RAM share and stop all Dockers and VMs I get roughly 1.1 GB/s depending on file size, and iperf3 tells me I'm at 9.54-9.84 Gb/s.
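
    Making that kind of RAM share for testing is a one-liner with tmpfs; a sketch with a made-up size and mount point:

      # carve out an 8 GiB RAM disk to take the drives out of the picture, then share or copy against it
      sudo mkdir -p /mnt/ramdisk
      sudo mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk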

  • @ninjulian5634
    @ninjulian5634 11 months ago

    OK, maybe I'm stupid, but PCIe 3.0 x4 only allows for a theoretical maximum of 4 GB/s, so 32 Gb/s, right? How could you max out one of the 40Gb/s ports with that connection?

    • @ElectronicsWizardry
      @ElectronicsWizardry  11 months ago +1

      Yeah, you're right, you can't max out 40GbE with a Gen3 x4 SSD. I did most of my testing with RAM disks in this video to rule out this factor. You can still get a lot more speed over 40GbE than 10GbE with these SSDs though.

    • @ninjulian5634
      @ninjulian5634 11 months ago

      @@ElectronicsWizardry Oh okay, I thought I was going insane for a minute lol. But my comment was in relation to your statement around 0:45, not the SSDs. Great video though. :)

    • @ElectronicsWizardry
      @ElectronicsWizardry  11 months ago +1

      Oh derp. A gen 3 x4 slot and a gen 2 x8 slot would limit speeds a bit, and I unfortunately didn't do any specific testing. I should have checked a bit more, but I believe I could get close to full speed on those slower link speeds.
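
      For reference, a rough back-of-the-envelope check of those slot limits (per-lane figures are the usual effective rates after encoding overhead, not numbers measured in the video):

        # ~0.985 GB/s per PCIe 3.0 lane (128b/130b), 0.5 GB/s per PCIe 2.0 lane (8b/10b)
        awk 'BEGIN { printf "gen3 x4 ~ %.1f Gb/s, gen2 x8 ~ %.1f Gb/s (40GbE line rate = 40)\n", 0.985*4*8, 0.5*8*8 }'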

  • @declanmcardle
    @declanmcardle 11 months ago +2

    Why not use a different protocol from SMB? NFS/FTP/SFTP/SCP? Host your own speedtest web server and then see what the network speed is?

    • @declanmcardle
      @declanmcardle 11 months ago

      20 seconds later....yes, jumbo frames too...
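
      For anyone following along, jumbo frames are set per interface and must match end to end (switch included); a sketch with a made-up interface name and peer address:

        # bump MTU to 9000 on the 40GbE interface, then verify with a do-not-fragment ping
        sudo ip link set dev enp1s0 mtu 9000
        ping -M do -s 8972 192.168.40.2   # 8972 = 9000 - 20 (IP header) - 8 (ICMP header)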

    • @ElectronicsWizardry
      @ElectronicsWizardry  11 months ago

      I use smb as my default as it’s well supported. I tried nfs as well and I got similar performance and didn’t mention it in the video. I might look into different protocols for network file copies later on.

    • @declanmcardle
      @declanmcardle 11 months ago

      @@ElectronicsWizardry Also, when striping the volume, make sure you get the full 5GB/s from hdparm -t or similar and then maybe express the 3.2GB/s over the theoretical max to get a %age.

    • @declanmcardle
      @declanmcardle 11 months ago

      Also, you probably know this, but use atop or glances to make sure it's not some sort of interrupt bottleneck.
      Also, shift+H in top shows threads, so you can avoid using htop.

  • @shephusted2714
    @shephusted2714 10 months ago +1

    It makes too much sense in this day and age, where networking is arguably the weak link, especially for small biz. Dual NAS and workstation connected with 40G (no switch) makes a ton of sense, saves time, and ups productivity and efficiency. Maybe you could make 'the ultimate NAS' with an older platform in order to max out the RAM; a Z420 with a lower-powered CPU may be the ticket? Could be a decent starting point for an experimentation/bench lab.

  • @Anonymous______________
    @Anonymous______________ 11 months ago +1

    Multi-threaded or parallel IO workloads are the only way you're going to realistically saturate a 40GbE NIC. iperf3 is a very useful tool for line testing as it measures raw TCP performance, and it can also help with identifying potential CPU bottlenecks. If you're attempting to simply test performance through NFS or SMB, those protocols add significant overhead in and of themselves.
    Edit: Apparently my previous comment was removed by our YouTube overlords lol.
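
    A sketch of that kind of line test, with made-up addresses; parallel streams help when a single TCP stream runs out of CPU:

      # server
      iperf3 -s
      # client: 4 parallel streams for 30 s; verbose output includes CPU utilisation on both ends
      iperf3 -c 192.168.40.2 -P 4 -t 30 -V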

  • @ckckck12
    @ckckck12 7 months ago

    Brosef... Love your show. How the hell are you talking about 40gbe and flash me the panties of two PS/2 plugs for kb and mouse... This is like a joke or something.... Lol

    • @ElectronicsWizardry
      @ElectronicsWizardry  7 months ago +1

      Servers love their old ports. Stuff like VGA is still standard on new servers alongside multi-hundred-gig networking. PS/2 is basically gone on new servers, but I was testing with some older stuff.

  • @dominick253
    @dominick253 11 months ago +8

    Some of us still struggling to fill a gigabit 😂😂😂

    • @MagicGumable
      @MagicGumable 11 months ago +1

      This is actually true, but hey, people also buy sports cars without ever driving them faster than 60 mph.

    • @deeeezel
      @deeeezel 11 months ago

      My best switch is 10/100 😅

    • @Mr.Leeroy
      @Mr.Leeroy 11 months ago

      Modern HDDs do 1.3 - 2 Gbps each though

    • @brainwater
      @brainwater 11 months ago

      I finally utilized my gigabit connection fully for a full backup. I scheduled it for each night, but since I set it up with rsync it now backs up 20GB+ in a minute, because it doesn't have to transfer much.
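
      That incremental behaviour is rsync's normal delta transfer; a sketch of such a nightly job with made-up paths and hostname:

        # only changed files cross the wire; -a preserves metadata, --delete mirrors removals
        rsync -a --delete --stats /data/ backupbox:/backups/data/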

    • @blueants-lq7lr
      @blueants-lq7lr 3 months ago

      1Gb isn't enough for me. I had to do a backup of 60GB of data and it took forever. Currently using 10Gb and looking to move to 40Gb now and 100Gb later in the future.

  • @pepeshopping
    @pepeshopping 11 months ago

    When you don’t know, it shows.
    MTU will help, but can have other issues.
    Try checking/learning about TCP window size, selective acknowledgments, fast retransmit and you won’t be as “surprised”…
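
    A sketch of where those knobs live on a Linux box (read-only checks; the autotuned defaults are usually fine, so treat tuning as an experiment):

      # window scaling, selective ACKs, and the receive-buffer autotuning limits
      sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_sack net.ipv4.tcp_rmem net.core.rmem_max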

  • @zyghom
    @zyghom 11 months ago

    And I just upgraded my 1Gbps to 2.5Gbps... geeeeeez, 40Gbps...