To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/CraftComputing/ - plus, the first 200 of you will get 20% off Brilliant's annual premium subscription!
Your shirt is brilliant!
You pronounced it correctly (not that anyone minds if you didn't, just nice to see)
why does your intro have the sound of water pouring into a glass??? you would think you'd have at least used a carbonated beverage pouring sound for it... right??? You have the MIC... you have the BEVERAGE.... now fix it!!! (and lemme have the beverage when you're done plz?? ... that's the whole reason I wrote this... I wuntz bierz plz :P) :)
Forgot to put the command in the video description - as you said at 15:45 ;)
Those mad lads at Mikrotik just sent not one but a couple of their flagship 100Gb switches, plus accessories, no questions asked. Jeff truly wields the power of the gods.
Facts haha
I still cannot justify the two 2.5 gigabit switches I need in my basement and office, and this guy is doing 100 gig! Good for you, great content! Keep it coming!!!
I have a 36-port 100 Gbps Infiniband switch in my basement.
Yeah, it really is frustrating that consumer networking has been dragging for the better part of TWO decades now. I swear they introduced 2.5Gbps and 5Gbps just to nickel-and-dime us instead of going to 10Gbps, where they can keep overcharging businesses.
@@ewenchan1239 ah stop trolling
@@bojinglebells It hasn't; you have no idea what you're talking about. Most consumers don't even use 1Gbps, given how widespread Wi-Fi is. 2.5 and 5Gbps were introduced for enterprise, to reuse Cat5e cables.
Anyway, there are already 400Gb devices available, or even 800G, though of course not for consumers or the average enterprise...
@@wiziek they don't put 2.5G on consumer products so that enterprise can use them. But sure, keep being a know-it-all ass.
iperf3 tip: it natively supports multiple parallel streams with the -P flag, no need for multiple instances ;) Great upgrade though; it gave me the itch to dust off my own 100Gbps cards I have lying around.
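For anyone who wants to try it, a minimal sketch (the server address here is a placeholder; -P, -t, -s, and -c are standard iperf3 flags):

    # on the server
    iperf3 -s
    # on the client: 8 parallel TCP streams for 30 seconds
    iperf3 -c 10.0.0.2 -P 8 -t 30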
THIS
This is the level of absurdity that I love from the channel.
I was finally able to upgrade my NAS and main PC to 10 gig, because I needed to upgrade my NAS and didn't want to deal with migrating 30TB of data over a 1G connection.
Your shirt... that little saying is drilled deep into my head, after making thousands, tens of thousands, hundreds of thousands of patch cables over the last 20+ years.
I'd like to see RDMA in Linux and Windows through SMB Direct, as well as iSCSI and NFS. RDMA should remove all CPU bottlenecks, since the transfers will no longer use traditional file stacks. Make sure when doing any tests with iSCSI to turn off synchronous writes in TrueNAS; it will allow better performance for tests, although it shouldn't matter once you get NVMe.
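For reference, a rough sketch of the sync-write toggle from the TrueNAS shell (the pool/dataset name is made up; sync is a standard ZFS property):

    # check the current setting on the zvol backing the iSCSI extent
    zfs get sync tank/iscsi-extent
    # disable synchronous writes for benchmarking only
    zfs set sync=disabled tank/iscsi-extent
    # put it back when you're done
    zfs set sync=standard tank/iscsi-extent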
Well, not quite correct... RDMA itself removes the TCP stack and goes RAM to RAM via InfiniBand, like DMA does from, say, a sound card to RAM, but between machines. Filesystems are still involved, especially in userland program space, where fopen, read, write, and fclose operations are used.
It's super driver dependent and the drivers suck ass. Tried to implement it for a fileserver project a while back but couldn't get it to not drop packets.
NVMe over RoCEv2 requires Converged Enhanced Ethernet support to avoid flow-control issues, or reliability problems may result - I do not think these are CEE-capable.
Damn, 100gig... I would only need that between my two servers... even though I was only considering a 25G upgrade...
Thanks a lot, Jeff... My bank account is getting drained.
So it's 5 months later, and I have only now picked my jaw back up off the floor after hearing you say you got those 16 Intel 100gig transceivers for 5 bucks each.
If it makes you feel any better, I still can't believe it myself.
It's great to see a good use of 100gig. I upgraded to 10gig with the CRS326 and CRS317 switches a few years back and think I am good for the foreseeable future.
It took a long time for me to finally try out some MikroTik hardware. It was their Wireless Wire kit, which is ironic given my long-standing distaste for infrastructure wireless, but genuinely the best choice.
Other than some configuration bobbles - they're the link in the middle of a double-NAT setup - it's been _very_ nice. Incredibly reliable and simple to install. I know double-NAT is bad, but I haven't been able to successfully argue for taking over the ISP-provided external router yet.
On systems with only one X16 slot, I would recommend running your 100G NIC in the X16 and your GPU in the X8 or X4 slot, the GPU might only lose a few per cent of performance, but your 100G NIC will thank you.
Having a personal, local fiber network in your home just sounds so cool to me lol
Great video. With 100Gb you could pretty much run all the VM storage over iSCSI. Would make for an interesting project.
That's what I did, but with 40GbE on my Proxmox.
Most of the clients I work for have 10Gbit (either Fibre Channel or Ethernet) for VM storage.
"over iscsi"
or NFS
The way you are installing that is making my DC senses tingle. MikroTik designed this to be a back-of-rack device; I bet the fans are blowing the wrong way for your configuration. That's why the power and QSFP are on the same side. Edge switches have power on one side and ports on the opposite side because you generally link them to patch ports for your end devices off the front.
Also, multimode (OM4) is fine and spec'd up to 100G over 125m (400ft). The problem, as you saw, is that QSFP28s in that configuration want 8-strand (or more likely 12-strand) MPO, and they are not cheap.
I really wish you had been clearer that the switch to single-mode infrastructure wasn't a limitation of the OM4 itself but a budget limitation of buying the hardware to allow OM4 to operate at 100Gb speeds. OM5 has the same issue. However, the old datacenter infrastructure that I supported ran either 24, 48, or 96 strands to each cabinet using 1x12 MPO, so even with the cost savings of SM (a relatively recent thing), it was still cheaper to buy the MM-compatible 100GBASE-SR4 QSFP28 modules than it would have been to replace the cable plant.
I appreciate the shirt, after doing thousands of cable ends in my life thus far.
The fact that all of this is as (relatively) inexpensive as it is is freaking crazy to me
I bought a 2U wall-mount rack and bolted it under my desk for my PDU. Works perfectly and keeps everything out of the way. Also installed some under-desk pockets to hold power bricks and the like.
It's crazy how far transfer speeds have gone. It was only like 15 years ago we were splitting DS0's off of DS1's to backhaul voice on cell sites.
For file transfers and real applications (such as streaming) you would need to set up NFS over RDMA or Samba over RDMA. I don't think the bottleneck is in the SSD RAID. Standard Ethernet works fine for synthetic benchmarks using specialized applications (like iperf), but to get similar speeds with network filesystems and non-specialized apps you really need RDMA.
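As a sketch of what that looks like on the Linux client side (server name and paths are placeholders, and this assumes the server has RDMA enabled for NFS; proto=rdma on port 20049 is the standard mount option):

    # mount the export over RDMA instead of TCP
    mount -t nfs -o nfsvers=4.2,proto=rdma,port=20049 server:/tank/share /mnt/share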
Remote direct memory access
The one thing I like about Mikrotik is I don't have to fight with GBIC and SFP compatibility! Intel, 10Gtek, Nokia (FTTH SFP). It doesn't care and it just works.
Awesome video, I have a question regarding safety though:
Could you please be more clear on the dangers of leaving SFP/QSFP lasers open? It can permanently blind someone, and proper precaution should be taken when handling lasers.
Love to see the Rack Studs, they're so great.
never clicked so fast on a video
One thing I'd really like to see is inter-VLAN routing speeds, hardware offloading, etc.
There seems to be very little information around correctly setting this up without just killing your switch's/router's CPU.
Usually, if you're geeky enough to have 10 or 100Gb networking, then you're going to have VLANs. :-)
It would help me get on the 10/100Gb train!
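If I'm reading the RouterOS v7 docs right, the relevant knob on these Marvell-based CRS switches is L3 hardware offloading, enabled globally and per port, roughly like this (treat it as a sketch, not gospel):

    /interface/ethernet/switch set 0 l3-hw-offloading=yes
    /interface/ethernet/switch/port set [find] l3-hw-offloading=yes

Without it, inter-VLAN traffic gets punted to the switch CPU, which is exactly the killing-the-CPU problem above.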
I was going to ask how he set up inter-VLAN routing with that Ubiquiti UDM-Pro. I went to 25 Gbps and had to collapse my VLANs to put all of the 25 Gbps clients in a single VLAN if I still wanted to use the UDM-Pro. An alternative might be to use the Mikrotik as the inter-VLAN router, but I don't know what the speeds are. The UDM-Pro just couldn't cut it for me.
When you said 100 gigabit full duplex it gave me a full duplex. I've been considering a network upgrade for a bit and I'd very much like to get 10g links in place, but 100g is almost an incomprehensible number.
If your full duplex lasts more than 4 hours, make sure to call your doctor.
That is fast. I know less than Linus about the Linux universe, but it is always fun to see the hardware. Also glad you mentioned the tree grinder across the street; for most of the video I was sure you had a noisy fan near a microphone. Cheers
Good job wearing green! This comes from a professional brewer in Ireland who was raised in the PACNW! Happy St. Patrick's Day. Let me know if you ever want to homebrew. Super fast switch; I just ordered the 570/80 you just presented.
When those lights first came on, Jeff was grinning like a kid who just had the best Xmas of his life! 😆
It was like getting my Super Nintendo all over again 😁
@@CraftComputing - There is an unmistakable look of pure joy, that can’t be faked. You had that look just then. 😆
I grew up having a BBS on 9600 baud (9.6 kbps) and the move to 19,200 baud was a giant leap. That was still all over a circuit switched telephone network, i.e. not IP networks yet.
The speeds you reach in this video are like 10 million times higher... Talk about progress...
Oh, that t-shirt. I want one!
I was thinking the same!
vkc.sh/product-tag/t568b-cheat-sheet/
Curious what distance those single-mode optics are rated for? Did you put attenuators on the (preferably) receive ends of the links? You are going to shorten the life of your optics if you are blasting 10km optic power levels over 3-meter cables.
I could run so many guestbooks with hundred gigabit networking.
Those bottom power plugs shown at 9:59 & 13:39 aren't in all the way, which is a fire hazard since it could lead to arcing, so definitely get those in all the way. You should also get that dust off so it doesn't get into the sockets.
Anyway, great vid, it's always fun to sit back and watch your videos
I bumped them while wiggling behind the rack. I did fix them.
@@CraftComputing Good to hear 👍
I'm surprised how long it took me to understand that shirt... I always think OrangeWhite-Orange-GreenWhite-Blue.... in my head
If you want to push the network, storage, and CPU a bit, database ETL (extract, transform and load) will be a good general stress test.
Not only are you capable of saturating the network link, but for how long, and for how much data that has to be processed, structured, and stored in a database that can then be queried quickly? I'm not sure what's out there for "canned" large-footprint ETL benchmarks, though.
I do not have the kind of speed requirements, files, workloads, etc.; that said, it is definitely drool-worthy. I agree on the "typical" workload usage and could do with that kind of speed and storage for my Steam library, and I wonder what kind of load times you could get using a net-box on that network.
Our MikroTik gear is cheap but unfortunately buggy. Their OSPF implementation has a memory leak. The CCR2004 has a few problems that severely limit bandwidth. Also, ROS7 still has some blocker bugs for us.
I get about the same, ~ 35 Gbps per stream with iperf3, and about 16 Gbps with plain SMB. Using RDMA (SMB-Direct) I can achieve 40 Gbps file copies, but only in one direction. (Uploading from W11Pro4WS to Svr2022) Tried enabling PFC+DCB on my equipment, speed went down. More troubleshooting needed.
My goodness. I have a hard enough time using all of my 10GbE setup, even between servers doing large VM backups! Hahahaha. Good video; looking forward to more videos on it.
Nice network, good job!
vkc.sh/product-tag/t568b-cheat-sheet/
Looks to be from the YouTube channel "Veronica Explains" 🙂
I have been waiting for this video so much!! And it's 30 minutes! fun!
Keep creating amazing content!
What, no 400Gbps?
Baby steps.
I also had the same issue with Mellanox ConnectX-2 and ConnectX-4 cards on some HPE workstations.
Great video as always!
But why, MikroTik, did you put the AC power inlets on the same side as the network ports?
A lot of networking rack enclosures don't have access in the rear, so hot-swapping the PSUs would be impossible. This switch is trying to target the broadest possible audience.
Some racks don't allow access to the back of the equipment, like the ones telecom operators use in BTS sites.
@@marcogenovesi8570 this ^^^ a lot of places you simply can't get to the back of the gear (without disturbing a bunch of other stuff which would defeat the purpose of having the hot swap power supplies)
Awesome video, Jeff. LOVE Mikrotik!
I'm very happy with my MikroTik gear.
100Gb is very cool for most businesses, let alone home, but that mains lead is about to pull out of the wall; can you get 90° NEMA plugs so the strain is downwards?
Was waiting for someone to review this switch in the wild. Interesting regarding the CPU bottleneck; assuming there's a hardware DMA workaround that doesn't involve the CPU. We're getting close to RAM speeds, let alone storage.
I would love to see an iSCSI network boot comparison between a SATA SSD and NVMe. In addition to that, mount a game drive for Steam and see what kind of CPU usage is generated through the streamed block storage with that much bandwidth.
To avoid the scheduler on FreeBSD bouncing the process around on different cores, use the cpuset command with iperf to lock it down to a specific core. I was able to achieve much better rates when benchmarking like this.
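For example (the core number and server address are arbitrary; cpuset -l takes the CPU list to pin to):

    # pin the iperf3 client to core 2 so the scheduler can't migrate it
    cpuset -l 2 iperf3 -c 10.0.0.2 -t 30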
Oh my gosh, love the t-shirt!!! When I first noticed it I just laughed and laughed. Mostly only network geeks will get it. Well done!
Quite outdated for *this* video though 😂
Great video! At this point, do you really even need to copy the footage back to edit it, or can you do that directly on the remote drives (using copy-on-write so you don't overwrite the originals or have to create a copy; deduplication might net you some interesting savings there, too)? I have a funny feeling you might run into issues and bottlenecks with SMB, the File Explorer equivalent, or similar. But still, that would be a use case I'd like to see. Maybe you could also try playing some disk-intensive games directly from remote storage and seeing if they remain playable? Just a "totally overkill hardware for gaming" kind of idea. And thanks for the idea of running single-mode over multi-mode if you want "cheap" future upgradability. I'd like to run some fiber lines around the house for the connection between the desktop and the NAS. I'm looking at 10Gbps right now, but it would suck to be stuck there in the future.
I think the elephant in the room is how the hell you got those optics for $5 each.
Took me a minute to realize what that shirt means. I'm drinking and eating as I'm watching this and thinking about that shirt. I like it.
What a cable hell mess; can you use some red Velcro please?
Jeff,
In this video, you were just like a kid at Christmas, so Merry Christmas! Enjoy your massive bandwidth.
Looking forward to the TrueNAS testing with cache and all that 🙂
Great Scott! That is some seriously impressive gear; thanks for sharing.
What MTU do you run? Jumbo frames can help by reducing the packet-per-second rate, which cuts the number of interrupts the NICs have to handle and can have beneficial side effects, like limiting the impact of single-threaded performance on the overall benchmark results.
As for testing, I would want to know the CPU usage of the Tiks while running these tests; it's most likely the switch chip will handle the entire data path, but there can be exceptions. Can you also comment on what type of SFPs you got for that price? I assume nothing past 10km stuff (and normal 1310nm). I am happy to hear that the Intel SFPs work in the Mellanox cards, as past experience with Intel and SFPs is that they don't work with anything but themselves (the Tiks accept pretty much anything). We have been heavily using Broadcom cards as a result, unless the setup specifically calls for Intel (usually for their offload functionality).
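Re the jumbo frames point: on the Linux side that's something like this (interface name is a placeholder; 9000 is the usual jumbo MTU, and every device in the path, switch included, has to agree):

    ip link set dev enp65s0 mtu 9000
    # verify it took
    ip link show dev enp65s0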
I got a Celestica DX010 and can't even get to 25. Still love the overkill. I went with bcachefs for testing my storage in a tiered configuration; I don't have all that many NVMe drives to put my data just there.
I really want to see how you maximize the network.
I've been meaning to pick up the same router, but I'm not sure how I should be setting up NVMe drives to maximize it.
Should I be using an x16 PCIe 4.0 slot with a 4x4x4x4 PCIe bifurcation card? Striped + mirrored?
What about SATA SSDs; how fast can those be made in ZFS?
What about tiered storage with an HDD, SSD, and NVMe pool, with the stuff you're actively working on being moved to the NVMe pool?
I would like to see the peak of cloud gaming
Imagine Mikrotik being nice enough to send you two of these - we can but dream. Amazing stuff
I don't know if I've seen Jeff this giddy before!
I'm a simple man. I see 100Gb lights, I smile.
I couldn't help but spit my coffee out when you mentioned how much you paid for those optics....
Damn --- I have not been this jealous in a while lol.
That's sick tho
Your setup gives me something to aim towards... maybe one day.
At 100gig, I think that's when you start to think about using DPUs in your end systems, not just NICs.
This came up on my YouTube feed the day two new 32x200Gbps switches were delivered at work 😂
I had a weird idea: try using high-speed NVMe storage over a 100Gb connection as RAM on older systems (DDR2/DDR3), to finally have the opportunity to download RAM via the Internet. But this is difficult to implement, because you would need special custom DIMMs with a 100Gb network connection, and software to run it.
Install very little physical RAM on your system, put your swap partition on that, and use swap-as-RAM.
You used to be able to do this with GlusterFS version 3.7.
Been there, done that.
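The swap side of that is simple enough on Linux; a minimal sketch (the device path is a placeholder for whatever network block device you map the remote storage to):

    # format and enable the network block device as swap
    mkswap /dev/nbd0
    swapon /dev/nbd0
    # confirm it's in use
    swapon --show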
@@ewenchan1239 Nah, it won't be the same as purely "internet" RAM sticks. And how will you do it on a Windows machine?
Nice video! Would it be possible to see how long it takes to do a snapshot of a VM?
And I just finally upgraded my workstation from an AMD Phenom II X6 1100T to a Haswell i7. (Yes, Haswell.) My IP camera network is Fast Ethernet based (10/100) and has no bottlenecks. Man, I wish I could justify upgrading. Oh well, that's what makes something like this so fun to watch. I'm not a huge fan of Mikrotik/RouterOS; much prefer pfSense.
FINALLY!!!!!!
You need a BiDi QSFP28, and you can run 100G on your OM fiber, but it's cheaper to put in SMF, yes.
That is one snazzy shirt you got, Jeff!
It's really amazing... really proud of the Latvian guys... )) MikroTik, well done )))
Ok, this is probably dumb. But I'm curious if there is any practical use for a ram drive on your server shared over the 100 Gbps network.
I'm also curious how a 100 Gbps connection compares to 10 Gbps in latency.
Also, regarding your future plans to use an NVME pool, I've heard that ZFS actively hinders NVME performance (according to a presentation by Allan Jude from last year), so a different file system for comparison may be interesting.
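If anyone wants to try the RAM-drive idea, a minimal sketch on a Linux server (size and path are made up; you'd then export the directory over SMB/NFS as usual):

    mkdir -p /mnt/ramdisk
    # back the share with RAM; contents vanish on reboot
    mount -t tmpfs -o size=32G tmpfs /mnt/ramdisk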
Hm, can’t you break out multiple virtual devices to spread out the interrupt load and send queues?
You made me an alcoholic. Always drinking your episodes with beer :)!
I had trouble using the connectx-4 on a board as well. It wouldn't boot with my U.2 16x bifurcated card installed. They can definitely be finicky.
Where can I read up on standards and compatibility of SFP+, QSFP, etc.? Are they intercompatible, or if you get a QSFP card, do you need to get only QSFP transceivers? Do all transceivers of the same level work in all cards? I recall Cisco stuff won't play with other manufacturers' stuff... is that still the case? Are there other known incompatibilities within the world of this stuff? Is there a guide to buying used fast networking stuff on eBay anywhere?
*grumble* First reply got eaten by YouTube (probably the link)....
You can get adapters to allow you to run SFP+ (or even SFP, since they're in the same form-factor) modules in a QSFP slot, it's a passive/physical adapter that just brings out one lane to the SFP module rather than the full four lanes (the "Q" in QSFP stands for "Quad") provided by QSFP.
The biggest thing to know is that a lot of the major vendors are bastards and their equipment won't work with SFP (any flavour) modules that aren't "coded for" their equipment. i.e. there's no physical or electrical difference between a "generic" module and a "Cisco" module except the "Cisco" version has some registers set to "I work with Cisco"... Fortunately there are a number of vendors about (I linked one in the earlier message but apparently YT didn't like it...) which will supply you modules which are "coded for" any vendor you like. Mikrotik gear FWIW doesn't care, I've generic modules, "Cisco" modules and a few other "vendor" modules which have worked fine in any MikroTik kit I've shoved them into.
@@SomeMorganSomewhere Thanks! So it looks like the Mellanox cards are as vendor agnostic as Mikrotik boxes?
@@ChrisHolst Generally yes; I've not really found cards to be particularly picky about SFPs (worst case you can probably just flash them with a generic BIOS from the original manufacturer). It usually tends to be networking equipment that causes more drama.
I do know one person who's had issues getting some revisions of the Mellanox ConnectX-2 cards to switch to Ethernet mode but other than that they have seemed to "just work" everywhere I've used them/seen them used.
I have had some very weird issues with HP branded Intel cards in the past, wherein the card would work fine in my PC but if I tried to reboot while the card was warm my machine would be held in reset (needed a literal "cold" boot), happened with two different cards of the same model, very weird, but since I switched that one over to a ConnectX-2 it's been solid (the HP cards have worked fine in every *other* machine I've put them in so I presume it was some weird quirk of my combination of hardware).
I'm a sucker for fast networking ^^ Awesome video, Jeff.
Why would RAIDZ-1 be preferable to a striped mirror (RAID10) configuration in your use case? Other than the fact that you have more storage space.
Wouldn't RAID10 be better for write speeds and potential rebuilding of the pool in a worst-case scenario?
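For comparison, the striped-mirror layout looks like this in ZFS terms (pool and device names are made up):

    # RAID10-style pool: two mirrored pairs striped together
    zpool create fastpool mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1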
I love the t568b shirt and the Boimler shirt and I'm totally jealous of your new network setup. I can barely saturate the 10g on my home network, but my homelab storage server is a whole lot smaller than your monster.
But don't most people say "orange-white" rather than "white-orange", etc???? Because it's orange insulation with a white stripe on it?
@@Hitek146 I learned it as "white-orange" because verbally it's easier for me to differentiate. Saying "orange" twice in a row makes me lose track. Plus, I don't think the design would have worked as well with the colors all on the left-hand side of the shirt.
@@VeronicaExplains I agree that the colors all being on the left would be less visually attractive, but I think you have it backwards about the verbal repetition. Putting "white" first means that you are saying "orange" twice, which is what you say is what makes you lose track. Saying "white-orange, orange" puts the two oranges together, while saying "orange-white, orange" separates the two orange words. I always say "orange-white, orange, green-white, blue, blue-white, green, brown-white, brown", only putting the two blues together, rather than putting the two oranges and browns together. Plus, in my experience terminating old-school telephony cabling, where there can be hundreds of pairs in one bundle of many various colors, including purple, the stripe is always said last...
@Hitek146 I don't think I have it backwards from my angle, since I know what I remember in the server room (you do you though). Besides, it's a t-shirt, the design was the most important part for me.
10:15 Your sockets should be mounted rotated 180°!
For the NVMe RAID, wouldn't you just want to go for max speed, and then have it back up to a slower RAID? Since nothing on the fast storage should be there long term, just working space and then archive?
That's going to be the idea.
Installing multi-mode fiber, omg what were you thinking oh the humanity
Jeff, if possible, make a video on how to flash the firmware of the Mellanox cards from InfiniBand to Ethernet in Windows 10. Thanks.
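Strictly speaking it's a port configuration change rather than a firmware flash. With the NVIDIA/Mellanox MFT tools installed, it's roughly the following (the device path is a Linux-style example; on Windows the mst device name differs):

    mst start
    mst status
    # 1 = InfiniBand, 2 = Ethernet; P2 only exists on dual-port cards
    mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
    # reboot for the change to take effect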
Why didn't you try the bandwidth-test utility built into MikroTik, running switch to switch, to see what it could do?
Wow. Overkill - Love it!
I have a CCR2004-1G-12S+2XS and I would love to see how you would address the 12 SFP+ ports. If you are interested, I can send you a list of all my Mikrotik equipment.
Please add the command (and maybe a link to the documentation) to change the switching mode of the network cards :)
I NEED to know where you got that shirt, it would be an absolute hit at work.
Nice cable shirt!! Where did you pick that up? I want one!
I'd love to know Mikrotik's decision behind all those power input options. Dual mains input, sure, makes sense. PoE, yep, good idea at that power level. Random extra DC input via terminal block connector, wait, what? Please tell me there's some crazy engineer at Mikrotik who, like me, has a garage without mains power and a dodgy solar setup, and they wanted to run some 100G setup out there directly from that! (I suspect it's more likely something to do with data centres having a low-voltage feed from UPSes/shared PSUs rather than dedicated PSUs in various boxes and all that, but it's kind of an interesting option to include when, panel-space-wise, some other network connection might have been nice.)
Probably just a home lab issue, but it sure would be nice to have one (or more 🙂) 10G port option for WAN/LAN interconnect rather than having to use one of the 100Gs split into 4s and have one as the link to the regular WAN/LAN. At this price point they've got to be thinking a bunch of us crazy home labbers ( 🙂hello 🙂) are going to be buying this, right? (Obviously I'm hoping Mikrotik are reading this. As someone who has multiple switches with two 40GbE ports each, it's also super frustrating when just one or two extra 40GbEs would have made such a difference, since the uplink switch connections take them both and don't allow a server to be 40GbE-linked as well, but I digress....)
For other videos, I'd like to see more details on the various optics/cable selections. Having messed with 40G/10G stuff, I've found the optics for SFP+/SFP28/QSFP+/others to be quite a minefield, and it's quite unclear how well 10/25/40/100 play together. Great if you can get them for a few $, but at higher prices it's not fun to be figuring it out. I've started down the 8/12-core MPO fiber path for the 40G stuff; does that work in a 100G switch? (As in: are optics sensibly priced, available, and working? Does 8-strand work or is the full 12-fiber thing needed?) Are there single-mode QSFPs for 40G (and 10G)? What are the options for nice keystone connections (so far I've seen single/multimode only; I'm guessing size-wise that's our only choice really)? I could go on.
Also, great video 😊thanks. I nearly unsubscribed at the cocktail part, but then found I wasn't actually subscribed (not sure what happened there, I thought I was, maybe YT tricked me!), so I subscribed despite the cocktail....
Data centers and certain telecom cabinet installs prefer DC power, as they can convert it much more efficiently and cheaply at scale than on each individual unit.
Ironically, similar to, but the exact opposite of, how Google has small DC batteries installed in the tray with every server motherboard so they don't have to worry about whole-data-center power redundancy.
This is the patented Mikrotik shotgun approach, they want to target the biggest possible crowd and adding PoE and terminal block connectors is very simple and cheap. Adding more connectivity would mean increasing the cost significantly and it's not what they are going for with these switches. For what they are (modern low power 100gb) it's insanely cheap already.
40G is not interoperable with 10/25/100/400G, and as a technology it is a dead end; everyone jumped ship to the 10/25/100/400G train years ago (hence why the 40G stuff is cheap and plentiful on eBay).
@@hopkinssm1 Also pretty much every major networking vendor has DC input options (or in some cases as a standard feature) on their networking equipment so if you want to play in that space you need to do the same.
Is there a reason you aren't using the -P (parallel) flag on iperf3? -P 30 would run 30 parallel streams... Not sure if that changes your single thread testing though... Great video, love watching this stuff!
MikroTik hasn't touched SwitchOS for two years, so no surprise they haven't ported it over to the faster-than-10Gb switches. The latest version is 2.13. And their site is saying the CRS504 is RouterOS v7 only, so they've probably abandoned SwitchOS.
Given my experience trying to use RouterOS as well, I can say that performance will fall through the floor if you try to actually use that as a router. I had that problem with the aforementioned CRS317, which is why I have an OPNsense router instead.
Next to the person suggesting RDMA: the NVMe pool should be a RAID 10. A Z1 will have overhead in terms of block distribution and be a CPU hog when writing at those speeds.
I have an EPYC-based Proxmox server and have this same card lying around. Can I do a direct attachment between this and my MacBook's Thunderbolt Ethernet adapter (OWC) without any switches? I know I'll be capped at 10 gigabit.
Is 100Gb finally enough to do gaming over the (local) network without noticing?
Hi
Where can I find the link for the commands you used to update the Mellanox card firmware?
Thx
Putting a like on this just for the jazz montage.