4:03 Thank you for showing the command you used and where you have it saved. Many times in a video, when someone talks about a command, that's all they do: "I set up a config file to change that," and that's all they say about it. They never show what the file actually looks like, leaving a Linux noob like me not learning anything at all.
You may want to check your cards for heat. Many of these cards expect a rack-mount case with high airflow; I believe your card requires 200 LFM of airflow at 55°C, which a desktop case does not provide. You can strap a fan on the heatsink to provide the necessary cooling (Noctua has 40mm and 60mm models that should do the trick nicely; I'm currently waiting for two to come in). I have a Mellanox 100Gb NIC and a couple of Chelsio 40Gb NICs (I would go with 100Gb in the future even though my current switch only supports 40Gb), and they definitely need additional airflow: after 5 minutes you could cook a steak on them. The Mikrotik CRS326-24S+2Q+RM is a pretty nice switch to pair with them for connectivity.
Why is it you only get "as low" as 28 Gbps out of a 40 Gbps card? Are you experiencing the same thing with a 100Gb NIC?
@@PeterKocic I don't know his workstation specs, but on any consumer system, or even an older-gen Threadripper, he would run directly into bandwidth issues on the CPU/board itself. For example, on an AM4 system you only have 4 PCIe 4.0 lanes between the CPU and the chipset. That's ~7 GB/s, which has to be shared between the NIC and the NVMe SSD card, and voilà, you're left with a usable 3.5 GB/s. Apart from that, you will never reach those speeds with a normal Windows file transfer anyway. It has gotten a bit better in Win11 (it's not perfect), but some years ago on Win10 you were actually capped at around 10Gbit networking speed for a single file transfer. I'd personally always go with 25Gbit right now: it's cheap, doesn't draw as much power, has smaller cables, and you won't get much more out of the network anyway unless you run some really fancy systems with a lot of I/O lanes and use iSCSI instead of SMB.
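A rough back-of-the-envelope version of that chipset-link math, using theoretical per-lane rates before protocol overhead (so the totals land slightly above the real-world ~7 GB/s figure mentioned above):

```python
# Approximate bandwidth budget when both the NIC and an NVMe drive hang off
# the AM4 chipset's x4 PCIe 4.0 uplink.
PCIE4_GBPS_PER_LANE = 1.97          # GB/s per PCIe 4.0 lane after 128b/130b encoding

uplink = 4 * PCIE4_GBPS_PER_LANE    # CPU <-> chipset link, ~7.9 GB/s shared
nic_need = 40 / 8                   # a saturated 40GbE link needs ~5 GB/s
left_for_storage = uplink - nic_need

print(f"chipset uplink: ~{uplink:.1f} GB/s shared")
print(f"40GbE at line rate: ~{nic_need:.1f} GB/s, leaving ~{left_for_storage:.1f} GB/s for the SSD")
```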
You probably want to run something in the private IP ranges on the NICs. The odds of it causing an issue long term are low with just 2 IPs, but it's not great practice.
I did a similar thing with 10Gbps home networking, but little did I know the speed was capped by my SSD.
NVME drives are definitely the way to go
Spinning rust on ZFS... don't listen to fools who blindly say NVMe.
Storage layout is important... there will always be a bottleneck, it's almost certain... defeat it.
@@ejbully Why can't NVME on ZFS be an option?
@@ryanbell85 It is an option. I think it's better for caching than for data I/O, as you will not be able to achieve or maintain those advertised throughput speeds. Value-wise, identical spinning rust (7200 rpm or better) on mirrored, not-too-wide vdevs, preferably SAS drives with the correct JBOD controller, will net you great speeds and your wallet will thank you. Of course, standard disclaimer: results will vary depending on your I/O workload.
@@ejbully "I think its better for caching then for data i/o as you will not be able to achieve or maintain those
advertised throughput speeds." Are you familiar with the iSCSI and NFS protocols? Do you have any data to backup your claim that ZFS NVME is only suitable for caching? JBODs full of SAS drives definitely have their place and you would be greatly mistaken if you think that NVME drives are only suitable for caching.
I've been running 10Gbit for everything that I could pop a 10G card into for a good number of years. The better part of a decade, actually. I started with a Netgear 8-port 10G switch. A few years ago I replaced that with an off-lease Arista Networks 48-port 10G switch (loud, hot, and power hungry). Last year, I replaced that with the new Ubiquiti 10G aggregate switch. That device has four 25G ports.
I have two 12th generation Dell PowerEdge servers running ESXi and two big Synology NASes, both of which are configured to, among lots of other things, house VMs. There are about 120 VMs on the newer of the two NASes, with replicas and some related stuff on the older box. Both of the PowerEdge servers and both NASes have Mellanox 25G cards in them with OM3 fibre in-between. ESXi and Synology's DiskStation Manager both recognize the Mellanox cards out of the box. So, now, I have a mix of 1G, 10G and 25G running in the old home lab. Performance is fine and things generally run coolly. Disk latency for VMs is very low.
Doing shit just because you can is a perfectly valid use case. Your home lab is exactly for this kind of thought project.
In this use case, working off your server, and at this price, this is definitely a worthy upgrade.
On the topic of things people generally think are overkill: maybe a home-built router next, one that is low on energy consumption but can still route a WireGuard VPN at a minimum of 1 gig? More and more people and places are getting fiber connections.
(I know you have an awesome Ubiquiti setup, but it could be a fun project with some old server gear.)
Heck yeah man I’m always looking for projects. Many of the things I do aren’t necessarily the ‘best’ or even useful for most people but at least it’s fun lol.
@@RaidOwl Well, at least your videos are really inspiring, and the way you explain the material with your humor makes it easily digestible and "honest," unlike some YouTubers who put a "fake-sauce" layer on their videos. Keep up the good work!!
Crazy.... I literally did this a month ago using the same 40Gb cards, linking a TrueNAS box (also version 11) and 2 Proxmox servers. It was such a long process to get mlxfwmanager working correctly and to set up Proxmox with static routes between each of the servers. I didn't have to pass through the Mellanox card in TrueNAS, but I get 32.5Gb/s in ETH mode. Let me know if I can help.
Essentially, Proxmox itself runs off a single SATA SSD while all the VMs run through the 40Gb network on NVMe drives on TrueNAS via NFS.
Impressive, was that 32.5Gb/s in iperf or with file transfers?
@@RaidOwl It was while using iperf. I haven't tried a file transfer but KDiskMark gets 3.6GB/s read on a VM in this network over the wire.
@Ryan Bell: I've had this same configuration for the past 4 months. I put 40G cards in 3 servers. No downloads are needed to configure/flash these cards: use mstflint to flash the latest firmware and mstconfig to switch modes (a rough sketch of this follows below). There are more tools in the "mst" line that do much more. I also get around 30Gb, but only directly from the host; I only get 22Gb from a VM. I believe if I raised the MTU to 9000 I could get a lot more, but I'm having issues getting my switch (Cisco 3016) to pass jumbo frames.
@@trakkasure I wish I could justify getting a 40GbE switch like that! A 4-port 40GbE switch would be plenty for me if I could find one. I'll have to settle for my mesh network... at least for now :) MTU at 9000 helped a bit for me.
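A minimal sketch of that mstconfig mode switch, assuming the open-source mstflint package is installed. The PCI address and the Python wrapper are only illustrative; check the LINK_TYPE values against your own mstconfig query output first.

```python
# Rough sketch of the mstconfig workflow mentioned above.
# The PCI address is a placeholder -- find yours with `lspci | grep Mellanox`.
import subprocess

DEVICE = "04:00.0"  # hypothetical PCI address of the ConnectX-3

# Show current port configuration (LINK_TYPE_P1/P2: 1 = IB, 2 = ETH, 3 = VPI/auto)
subprocess.run(["mstconfig", "-d", DEVICE, "query"], check=True)

# Switch both ports to Ethernet; takes effect after a reboot or driver reload.
# (-y skips the confirmation prompt; drop it to confirm interactively.)
subprocess.run(
    ["mstconfig", "-y", "-d", DEVICE, "set", "LINK_TYPE_P1=2", "LINK_TYPE_P2=2"],
    check=True,
)
```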
I use ConnectX-3 Pro cards between Linux machines and a Mikrotik CRS326-24S+4Q+RM, and could achieve transfer rates of 33-37Gb/s between directly connected stations. Did you verify the specs of the PCIe slots used? To achieve 40Gb/s they must be PCIe 3.0 x8; 2.0 x8 will limit you to about 26Gb/s.
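For a quick sanity check of those slot limits, the line-rate math works out roughly like this (a back-of-the-envelope sketch that ignores packet/TLP overhead):

```python
# Approximate usable bandwidth of an x8 slot per PCIe generation.
lanes = 8
per_lane_gbps = {
    "PCIe 2.0": 5.0 * (8 / 10),     # 5 GT/s with 8b/10b encoding -> 4 Gbit/s per lane
    "PCIe 3.0": 8.0 * (128 / 130),  # 8 GT/s with 128b/130b encoding -> ~7.88 Gbit/s per lane
}
for gen, gbps in per_lane_gbps.items():
    print(f"{gen} x{lanes}: ~{gbps * lanes:.0f} Gbit/s before protocol overhead")
# PCIe 2.0 x8 tops out around 32 Gbit/s on paper (mid-20s in practice),
# so a 40GbE link really does want a 3.0 x8 slot.
```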
I run a Brocade ICX6610 as my main rack switch. I love that it supports 1Gb, 10Gb, and 40Gb all in one. I also run a Mellanox SX6036 for my 40Gb switch. It supports both Ethernet (with a license) and Infiniband through VPI mode. You can assign the ports that are Ethernet and Infiniband. Both are killer switches and I connect the SX6036 back to the Brocade via two of the 40GbE connections. Most of my machines in the rack now either support 40Gb Ethernet or 40/56Gb Infiniband. I have yet to run 40Gb lines throughout the house though. However, with 36 ports available, the sky is the limit!
Do you know what the cost of the Ethernet license would be?
@@DavidVincentSSM I'm not sure NVIDIA still sells the licenses to this switch, but there's good info on ServeTheHome on the SX6036.
40Gb/s is actually at least 1.4x faster than 10Gb/s
🤯🤯🤯
I need to point out that iperf3 is single-threaded while iperf is multi-threaded, which makes a difference in throughput. It's not a wide margin, but I figured it's the best way to saturate that link.
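If you're stuck with iperf3, a common workaround is to run several client processes in parallel against servers on different ports. A minimal sketch, assuming an iperf3 server is already listening on each port on the far end, with a placeholder server address:

```python
# Launch four iperf3 clients in parallel; the combined throughput of the four
# runs approximates what a multi-threaded test would show.
import subprocess

SERVER = "10.40.0.2"  # placeholder address of the far-end interface
PORTS = [5201, 5202, 5203, 5204]

clients = [
    subprocess.Popen(["iperf3", "-c", SERVER, "-p", str(port), "-t", "10"])
    for port in PORTS
]
for proc in clients:
    proc.wait()
```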
You should go to 100GbE. The procedure is mostly the same and the price is not all that much more; Mikrotik has nice 100G switches now too. 2023 will see more SMB and SOHO setups go to 100GbE in lieu of 25/40, and you can get breakout cables that split 100G into 4x 25GbE. It can be a major time saver for people who move around a lot of big data, and it lowers cluster overhead too.
I love Mikrotik! They have so much value and flexibility!
100 gig is still too expensive if you want real enterprise switching.
@@timramich The 100G Mikrotik switch is less than 100 bucks now. Compare the cost per port of 2.5G vs 100G and you will see 100G is actually cheap; don't leave all that performance on the table.
@@shephusted2714 Less than one hundred dollars? No.
@@timramich I meant to say 800, sorry. Per port, 100G is still a bargain compared to 2.5G, and you can use 25G breakout cables. Try eBay; there are lots of surplus and refurb fiber cards. It's the way to go for SMB and if you value your time.
Which edition of Windows are you using?
As I understand it, RDMA helps speeds when getting to 10G+ on Windows and is only available in Enterprise edition or Pro for Workstations (that's why I upgraded to Enterprise).
You've gotta try getting your hands on Mikrotik's flagship 100G gear for even more insanity 😜 I'm hopelessly behind, as I just recently upgraded to a Netgate 6100 and a 10G core switch with leaf switches still at 2.5G (I can't afford to do a complete upgrade in one go, so I have to do it in stages). I plan to buy a few mini PCs with 5900HX CPUs and 64GB of RAM to build a microk8s Kubernetes cluster, probably on top of a Proxmox cluster to make administration easier.
100G, nah. Skip that and just add a 0. Go for 400G. 😜
I can't wait to try this myself. I ordered some ConnectX-3 Pro EN cards
A crontab entry to make the setting persistent: that's also how I keep my MSI B560M PRO set so it can wake on LAN. I did a short video on it too.
Frankly, since you're not doing any switching between the devices and are instead opting for direct-attached fibre, I'd say go with IB instead. IB typically nets better latencies at those higher speeds, and for direct access, as in working off the network disk in production, this might improve the feeling of speed in typical use. Of course, this might not change a lot if you're only uploading/downloading stuff to/from the server before working locally and then uploading results back onto the storage server, since then burst throughput is what you need and IB might not be able to accommodate any increase due to medium/tech max speeds. On the other hand, SMB/CIFS can also be a limiting factor in your setup: on some hardware (i.e., when CPU-bottlenecked), switching to iSCSI could benefit you more due to fewer abstraction layers between the client and the disks in the storage machine.
Do you reckon link aggregation would work in this setup?
Hey, quick question: Do you have to order the Mellanox QSFP+ cable or will the Cisco QSFP+ cable work?
Thanks for making this video. Did your feet itch after being in that insulation?
Nah but I got some on my arms and that sucked
You have two ports per card, right? Have you tried running them in parallel as a bonded NIC? In theory that should double the speed and would "only" require a second cable run. I think Proxmox has an option for that in the UI; no idea how to do that on Windows...
Def worth looking into but that’s gonna be for future me lol
@@RaidOwl Run that second one over to my house :p
I don't think Windows consumer versions can do LAG/LACP, and it usually requires a switch for true link aggregation. It's also not great for single tasks; it's better for, say, two 40-gig streams rather than a single 80-gig stream, which would still cap at 40 gig.
I've got the Chelsio 40G cards with TrueNAS. 25G SFP28 is probably a better option for home than QSFP, which is x4 cabling. It all runs very hot, but if you have more than one NVMe SSD, 10G won't cut it. Either get a proper server chassis or at least use something in a standard case that you can pack with fans; those SSDs don't run cool either. Don't forget you'll need to exhaust the whole thing somewhere; putting it into a cupboard will probably end badly.
Also bear in mind that the transceivers are usually tailored to the kit they plug into. You may not get away with a cheap cable off eBay if you don't have a common setup.
I would like to know if you could run the cards in InfiniBand mode and see what is involved with that. Totally nerded out on this video.
what server rack case was that? I am in the market but I keep finding either way too expensive cases or ones that don't meet my needs.
My recommendation, described as 4 important things to prepare before you use 40GbE: 1) enough PCIe lanes; 2) a motherboard with a typical server chipset; 3) don't use an Apple Mac system; 4) in Windows, set high priority to background services instead of applications. Good luck!
Oh it's all fun and games until one of those fast packets has someone's eye out!
Thanks owl! The “transceivers gonna make me act up” bit had me dying
I'm a little late to the game on this thread, but I've done something similar. In my home office, I have two Unraid servers and two Windows 11 PCs. Each of these endpoints has a Mellanox ConnectX-3 card installed, connected to a CentOS system acting as a router. While it works, data transfer rates are nowhere near the rated speed of the cards and DAC cables I'm using. Transferring from and to NVMe drives, I get a transfer rate of about 5Gbps. A synthetic iperf3 test, Linux to Linux, shows about 25Gbps of bandwidth.
Why did you go with an AOC type of cable? 10 meters is not long enough to warrant an active optical cable.
Why didn't you just pop the other card into your Windows machine to change the mode permanently?
... crawling around your attic in Houston during summer... that's dedication...
I was up there for like 10 min and I was dripping by the end...crazy
Here is a good switch for a 40G/10G setup: the Brocade ICX6610 48-port.
40Gb requires a lot of research on your motherboards to ensure you're linking up at the highest PCIe version and maximum number of lanes while not sharing with other PCIe devices in the system. I played musical cards for a few hours and managed nearly 30Gb/s using 5-10 year old hardware. Proxmox will limit your setup as well; you need PCIe passthrough to your NAS VM, or the NAS on bare metal.
I'm slowly working on getting my 10Gb setup... but 40 being 5x faster... hmmm... lol. Jokes aside, thanks for sharing; seems like I'll stick to 10Gb for now. Though I have cards with dual 10Gb ports, so maybe I should try for a 20Gb setup.
I know Unix/Linux/etc. have such capability, but Windows 10 Pro doesn't... any recommendations on how to link the two ports together?
What about thunderbolt 4 / USB4?
don't know if id do this lol... but thanks to your videos i think about networking more and more!!! keep the videos coming!!!
I always buy more hardware to justify my prior purchases.
Wouldn't SMB Multichannel also be able to accomplish these speeds?
Next video: “I did it again! 100gig baby!” Would I recommend it? NO! Lol nice vid!
Why not use NFS?
Worth a shot I guess.
But be sure to have a look at pNFS and NFS + RDMA...
I have it in my setup; like you, it's nothing crazy, just host to host, namely from my TrueNAS system to the backup NAS.
What made you use 40.x.x.x instead of 10.x.x.x?
Easy to remember since it’s 40G and wanted it easily distinguishable from my regular subnet.
@@RaidOwl possibly others have mentioned this but you'd be better off using 10.40 or 172.16.40 private address range ;-)
@@kingneutron1 yeah I've since changed it
I really was hoping that you found a solution for my problems. *sigh*
That 10Gb cap is so damn annoying. I have been trying to find a way to get it to work, but it just doesn't work with virtio for me. If you check the connection speed in the terminal (sorry, I forgot which command), it will show that the connection is at 40Gb, but no matter what I do I can't get virtio to run at that speed.
One tip: if you want the DHCP server to give it an IP, do what I do. Bridge a regular 1Gb LAN port with a port on the card, use that bridge in the VM, and connect your workstation to the same port. The DHCP server will give both machines IPs and you don't have to worry about the IP hassle. Of course you will be limited to the virtio 10Gb, but it's the peace of mind I'm taking until I can find a solution for that 40Gb virtio nonsense.
And please hear my advice: don't even bother trying InfiniBand. Yes, it is supposed to be a better implementation and runs at 56Gb, but don't believe anyone who says it is plug and play; IT IS NOT. Any tiny adjustment you make to the network and it won't work anymore, and you have to reboot both machines. I even bought a Mellanox switch and I gotta say, it is horrible.
I don't know about modern implementations of it, like on CX5 or CX6, but I don't believe it is as ready for the market as it is believed to be. Just stick to regular old Ethernet.
Why the strange subnet of 44.0.0.x? Just why? I'm curious!
Cuz I picked a random one for the sake of the video lol. No real reason.
@@RaidOwl Please, please do yourself (and everyone else) a favor by using proper private IP space (192.168/16, 10/8, 172.16/12); a quick check is sketched below. I worked at a place in pre-internet days that used the SCO UNIX manual examples, which turned out to be public IP space, for all of its servers. Once we got internet-connected across the board, it was a real pain to deal with. Unknowing users may make the same mistake using your examples.
@@draskuul Yeah, it's been updated since.
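For anyone picking point-to-point addresses, Python's ipaddress module makes the RFC 1918 check trivial; the addresses below are just examples:

```python
# Flag addresses that are not in private (RFC 1918) space before using them
# on a LAN or a direct NIC-to-NIC link.
import ipaddress

for addr in ["40.0.0.1", "44.0.0.1", "10.40.0.1", "172.16.40.1", "192.168.40.1"]:
    ip = ipaddress.ip_address(addr)
    print(f"{addr}: {'private' if ip.is_private else 'PUBLIC - pick another range'}")
```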
The problem is just that PCI Express 3.0 x8 has only a 7.8 GB/s transfer rate; PCI Express 4.0 has double that, but never 20 GB/s.
You are mistaking GB for Gbps. 7.8 GB/s is roughly 62 Gbps.
any way to do this for Mac? Off an UNRAID server?
Budget option: HP ConnectX-3 Pro cards (HP 764285-B21 10/40Gb 2P 544+FLR QSFP InfiniBand IB FDR). I paid 27 euros each for the first 2, and now that they're down to 18 I bought another 2 as spare parts. They need an adapter from LOM to PCIe, which is why they are cheap; the adapter costs 8-10 euros (PCIe x8 riser card for HP FlexibleLOM 2-port GbE 331FLR 366FLR 544FLR 561FLR), and you get the Pro version of the Mellanox card, i.e. RoCE 2.0. Besides, TrueNAS Scale supports InfiniBand now, and Windows 11 Pro as well, so you can use it. It's not that much faster, but the latency is way lower. I get about 1-2 GB/s with the 4 x 4 TB NVMe Z1 array; HDDs ~500 MB/s, smaller files way less (as usual).
I have a Windows server and a Juniper EX4300 switch that has QSFP+ ports on the back. I have only seen them used in a stack configuration with another switch. Would I be able to buy one of these cards and use the QSFP+ ports on the switch as a network interface to get a 40G connection to my server? I ask because I am not sure if the QSFP+ ports on my switch can be used as normal network ports like the others.
Crazy to think 100Gb is becoming more common in homelabs now and 10Gb can borderline be found in the trash. 😅
WHAT ISP PROVIDES THAT TYPE OF SPEED?? Here I am, just a few months into Starlink after having HughesNet for 8 years... I reliably get downloads in the 90s Mbps now and I feel like I am king! How, and who, provides that much speed?? Wow.
That's not the speed through my ISP, that's just the speed I can get from one computer to another on my LAN.
Try crossflashing the latest firmware on both cards; that fixed some problems for me in the end. From what I remember, you only get the full 40Gb over a single port in IB mode.
Yeah IB wouldn’t play nice with Proxmox tho. Def worth looking into at some point.
you know ... there is a saying.. right?
"there is NEVER enough speed"
so... give me 40
Give me fuel
Give me fire..
ghmm
the end.
Seems like you’d get better speeds with less overhead doing thunderbolt direct-attach-storage over optical
I can't even get 10gb to work on my LAN, let alone 40gb.
Network speeds are like the lift kits of an IT nerd - You're compensating the higher you go. This coming from somebody who recently went to 10G in my home. 🤓
lol I can agree with that
V2.0 would be using SRIOV to pass through a virtual function to the VM ;)
I hope you and your servers stay nice and cool during this heatwave.
It's sad that you go from 10G to 40G and only double your speed. I am just looking into this and it seems to be normal, at least while using Windows file copy.
Def diminishing returns
40 gigE + Thunderbolt FTW!
Revisiting a video I once thought I would never be able to revisit, haha. I'm trying to set up a Proxmox cluster with network storage, and oddly enough, in 2023 40Gbps gear is almost as cheap as 10Gbps gear.
Why are you using public ip addresses on your LAN?
Those have been changed to private since then
network speed can be limited by drive speed
How come you have 40.0.0.x addresses on your local network?
It’s my lucky number. But yeah it’s not in my subnet so I just picked something.
@@RaidOwl I mean... you can do that? I don't understand networks; I'm more on the developer side of things, so networks are like dark magic for me :) I'm just surprised; I would expect something like the router to complain or something...
@@urzaaaaa Yeah it's because there is no router in that setup. It's just a direct connection between computers :)
@@RaidOwl 40 like 40Gbit... :D
I've been using these for a few years. Look into running both ports on the cards and auto-sharing RDMA/SMB. VPI should let you set the cards to 56Gb/s Ethernet. As a test I set up two 100GB RAM disks and the speeds were really entertaining. Benchmarking Gen3 NVMe over the network was only a tick slower than local.
Go big or go home!
Next, I’ve buy a 100 gbe nic
That Asus Hyper card is so nice and well worth the money.
Loving it so far!
I can give you 2 tips for your 40Gbit network cards. 1) Use NFS for file transfer; it's fairly easy to activate in Windows, only the drive mounts have to be redone at every restart as a startup task. 2) If you really, really, really need SMB on your LAN, use the Pro for Workstations edition of Windows and use SMB Direct/Multichannel, with which the CPU doesn't get hit by the network traffic. There are some good tutorials out there, even for Linux.
These cards are also no longer supported in vmware.
That is awesome. It's getting so much cheaper now for 40G.
Ironically it *used to* be even cheaper around 2017... prices of used server gear have increased dramatically over the past 3 years. Look at the Linus Tech Tips video on a similar setup from years ago; I want to say he got his cards for less than half the price they sell for now. I got some back then, the same cards, for like $35 a card.
its great
Try Jumbo frame
The Windows network stack is absolute BS, but with some adjustments you should be able to hit 35-37Gbit on that card. It's the same with 10Gbit: by default it only gives you about 3-4Gbit in Windows, but you can get it to around 7-9Gbit with some tuning.
It also depends on the version of Windows. Windows Server does way better than Home and Pro, and Workstation is better still if you have RDMA enabled on both ends.
Good places to start: frame size / MTU (MTU 9000, i.e. jumbo frames, is a good idea when working with big files locally); "Large Send Offload" (on some systems the feature is best left on, but on others it is a bottleneck); and interrupt moderation, which is on by default. On some systems that is good for avoiding dedicating too much priority to the network, but on a beefy system, turning it off can often boost network performance significantly.
If you want to see your card perform at almost full blast, just boot your PC from an Ubuntu USB and run an iperf to the BSD NAS.
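A small helper along those lines for comparing before/after tuning runs. This is just a sketch: the server address is a placeholder and it assumes iperf3 is installed on the client.

```python
# Run a 10-second iperf3 test and report the achieved receive bitrate, handy
# for A/B-testing MTU, Large Send Offload, and interrupt-moderation changes.
import json
import subprocess

SERVER = "10.40.0.2"  # placeholder address of the NAS-side interface

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "--json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"~{gbps:.1f} Gbit/s received")
```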
Love your humour
Because you can.
I have a Ferrari...
But would I want you to have it??
Absolutely not! lol
That's kind of the tone of this vid on the receiving end.
The sound has some noise; you could clean up the audio.
Great content by the way :)
try iscsi performance
Nope, gonna stick to 10Gb; happy with that.
Smart
Good video, but please clean up the cables in your NAS. God, please pardon him.
Lmao yeahhhh I’ve been doing some upgrades so cable management will come when that’s finished
It seems like the only reason ANYONE does this is because they can. Transfer a file in .01 seconds vs .04 seconds? No thanks. It’s like modding a car for more, more, more horsepower when you almost never get to put all those horses to work. I, personally, wouldn’t spend the extra money on anything above 1Gb.
I agree and even said that this is dumb even for my use case. This belongs in enterprise solutions where you NEED that bandwidth, not in a home setup.
@@RaidOwl I know you did and didn’t mean to be critical of you or this video. I understand and appreciate why you did it; I’m just saying that - in general - spending the money on anything above 1Gb is foolish. May as well spend it on hookers and blow…
@@wojtek-33 For home use, I agree, which is why the shift is to 2.5G rather than 10G. However, 10G or more has its place: a single HDD can typically saturate a 1-gigabit link, which should show how slow it truly is, and a single SSD, even a crappy SATA one in a NAS, could saturate 4x 1-gigabit links. So anyone wanting to host a VM on shared storage is gonna cry when they try to do it over 1 gig.
40Gbps looks like a dead end; if you look at industry projections regarding port quantities sold, it's 10, 25, 100, 400.
The dual port 40GbE cards are cheaper than 10GbE dual port cards on eBay right now. Why pay more for a point-to-point connection?
@@ryanbell85 Many times, when a manufacturer declares a product 'obsolete', 'legacy', etc., active driver development stops or slows to a crawl.
@@jfkastner most home labs are full of unsupported, legacy, and second-hand equipment. It's just part of the fun to figure it out and stay on budget.
My goal is 100Gb... because why not, and it's cheap (I use Mikrotik).
Take my advice, I'm not using it.
You don't need 40gig in your home lab. Show that you can saturate the 10g.
I agree. That was the whole point of the video lol
Did you actually watch the video?
@Raid Owl Exactly, kind of my point; I could have worded it better. You could do a 10G video and show that 40 isn't needed, too.
10g would have cost more.
@@ryanbell85 Two 10G cards: $50. One DAC cable: $20.
People need to stop saying "research". They've been researching for hours. No you haven't. You've been studying. You didn't run real scientific experimentation with controls and variables, you read stuff online and flipped some switches. Most people have never conducted research in their lives. They study.
I used the scientific method. I also had a lab coat on…and nothing else 😉
@@RaidOwl i heavily respect this reply 🤣
Based
Tru
Have you looked into the Mikrotik CRS326-24S+2Q+RM? I know it is a little on the pricey side. Or, if you are going to go for this, then 100Gbps with the Mikrotik CRS504-4XQ-IN, just for sh!ts and giggles. :)
I would change that 40.x.x.x network into something under the rfc1918 private address space!
Good call
Calling yourself a tech YouTuber while being COMPLETELY clueless about InfiniBand. LOL
A homelab in Houston, TX? How do you handle power outages like in third-world countries? Hope your homelab is more of a "lab" only, which is fine with third-world-country-type power reliability.