8:31 time to upgrade your workstation, then!
Don’t do this to me, Jeff…
@RaidOwl Imagine: if you buy a modern Threadripper, you'd have enough PCIe for 400 Gbps...
@JeffGeerling no...plz...
@RaidOwl You know you must do it. You can always say the views will pay for it 😂
This is what everybody, even Red Shirt Jeff, needs. Overall a pretty good effort - PCIe lanes seem to be the limiting factor, and the workstation is the weak link. This video is a good one for biz: since they don't use the GPU so much, they could forego the switch and just do point-to-point from the workstation to the dual NAS. If you must have a GPU in the equation, then you need a server-level workstation/editing box with more lanes. What I got from this video: generally good content, but Raid Owl does have to follow up on this and optimize fully so he can enjoy more speed, save time, and be more productive. He may have to go to NVMe arrays also.
I now have a way to describe my hobbies: a crippling inability to be content with what I have.
@TheRealClutch1010 I know what you mean - just starting to build a home lab and home NAS.
40G is a bit awkward right now - homelab users mostly aren't there yet, but enterprise users are already upgrading from it to 100G.
People have been buying second-hand 40Gb enterprise switches for a while now. I first remember reading about that on the servethehome forums several years ago in regard to Brocade and Arista switches. It's 'obsolete' for the enterprise, which just means you can snap them up cheap second hand as a homelabber, but you definitely need to do some research so you don't get a switch that sounds like a jet taking off, and there's always power consumption to worry about.
Enterprise is already at 400g...
They're nice if you can use them, but it gets a bit annoying because, yeah, it's 40G but it's really 4x10G. As long as your tasks are multithreaded it can push it, but I honestly struggle to push more than two saturated links and a bit of extra traffic on the other two. I'm sure some people can, but it's really not as easy as you'd think. It does still work nicely for uplinks, though.
@nadtz Careful with used enterprise gear, as some of it needs an active license to make use of the features.
@rezenclowd3 even beyond that in labs
For the record, FS doesn't just "seem" like a good store, they are pretty much THE store for fiberoptic networking. They make EVERYTHING for fiber, and they're just about *the* go-to for third-party transceivers for the entire market. Their switches are used in enterprise as well, just at a much lower volume than their optics and accessories.
Cool! I’m not in touch with the enterprise market at all so this is good to know.
I second this. As long as you are NOT TAA locked.
10Gb networking seems to be the sweet spot for home labs, and your wife sounds exactly like mine when I tell her about changes I made to my home lab lol.
100%
As a network engineer... just use the switch. Point-to-point with a bridge is possible, but it brings nothing but pain.
That "oh god" from you wife definitely had undertones of her full awareness that too much money was spent. 😂
I use this switch for vSAN, breaking out to 2x 25GbE for each Dell server. Works great and I haven't had any issues yet. RouterOS is very odd and definitely takes some getting used to.
When's the threadripper video coming out?!
When AMD sponsors me
9:30 I'm disappointed that the next cut after "that would be stupid, right?" wasn't with those PC parts on the table. Current tech really isn't designed for single transfer speeds at that level. That switch was probably designed to be at the core layer of a network. Still fun to play with though.
Yeah if I really wanted to build out an expansive RDMA-enabled network that would require a different switch…or a bunch more NICs lol
By far the funniest Homelab channel on YT. Never change my Texan neighbor.
I love this video. Yes, I know there isn't a ton of use for it now, but I think there really could be if it was more common. I would love to see some sort of direct video-output-over-the-network solution. HDMI 2.1 is only 48Gbps and DisplayPort maxes out at 80Gbps. Currently remote video is handled by encoding the video signal with something like H.264, sending it over the network, and then decoding the video before presenting the picture on your display. I would love for there to be an option to skip those encoding/decoding steps, since they add extra latency and can introduce compression artifacts, and instead just send the entire video stream over the network with some overhead left over for USB peripherals. Again, I know this is a stupid upgrade now, but I would love to see this tech better utilized in the future. We have been stuck on 1Gbps for so long that the limitation itself has impacted what we can do with the bandwidth.
There kinda is already! In professional broadcasting they use the SMPTE 2110 standard, which is essentially raw video (SDI) over IP. That stuff doesn't exactly come cheap though and you need things like PTP aware switches to support it.
@inkprod That is awesome. I tried finding something, but all I was finding was boxes that looked like they still encode and decode the signal. This looks like exactly what I want. I wonder how open the standard is - like, are we talking open like AV1, restricted but not bad like H.264, or god-awful like H.265? You don't actually need that much bandwidth either. I said 80Gbps for DisplayPort, but it really depends on your resolution and refresh rate. A 4k120 direct stream is only 26Gbps and a 4k240 is 55Gbps. Sure, there is going to be some overhead for encapsulation, but you'd still have enough for a full 40Gbps Thunderbolt 4/USB4 connection right alongside it. If you actually had this networked through your home, you could have a single virtualization server in your network closet and simply have cheap endpoints everywhere else. Even if you were remote it wouldn't be worthless. I am a lucky one and have 10Gbps symmetrical home internet. I could use either a 1080p144 or a 1440p85 video stream with that. Sure, you are depending on your remote site also having that kind of connection, but a 1080p60 stream is only 3.2Gbps. Give it a few years and that may actually be doable.
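For anyone who wants to sanity-check those figures, here's a rough back-of-envelope calculator for raw, uncompressed video bitrates. It ignores blanking intervals and assumes 8-bit RGB, so the numbers quoted above (which include blanking and/or higher bit depths) land somewhat higher:

```python
# Rough raw-bitrate math for uncompressed video over the network.
# Ignores blanking intervals and encapsulation overhead, so real
# link requirements are somewhat higher than these figures.

def raw_bitrate_gbps(width: int, height: int, fps: int, bits_per_pixel: int = 24) -> float:
    """Raw pixel data rate in Gbps (8-bit RGB = 24 bits per pixel)."""
    return width * height * fps * bits_per_pixel / 1e9

modes = {
    "1080p60": (1920, 1080, 60),
    "1440p85": (2560, 1440, 85),
    "4k120":   (3840, 2160, 120),
    "4k240":   (3840, 2160, 240),
}

for label, (w, h, fps) in modes.items():
    print(f"{label:>8}: {raw_bitrate_gbps(w, h, fps):5.1f} Gbps raw")
# 1080p60: ~3.0, 1440p85: ~7.5, 4k120: ~23.9, 4k240: ~47.8 Gbps
```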
If you set the filesystem that hosts the SMB share to run in async mode on TrueNAS, it will go way faster. However, do note that this is not advised for sensitive data, as you can lose data on power failure. But it will fly, network-benchmark-wise.
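For reference, a minimal sketch of what that looks like on the TrueNAS side. It just shells out to the standard `zfs` CLI; the dataset name is hypothetical, and, as the comment says, sync=disabled trades crash safety for speed:

```python
# Minimal sketch: toggle ZFS sync on the dataset backing an SMB share.
# Run on the TrueNAS box itself. The dataset name is hypothetical --
# substitute your own. sync=disabled risks losing in-flight writes on
# power failure, exactly as the comment above warns.
import subprocess

DATASET = "tank/share"  # hypothetical dataset name

def set_sync(dataset: str, mode: str) -> None:
    """mode is one of: standard, always, disabled."""
    subprocess.run(["zfs", "set", f"sync={mode}", dataset], check=True)

def get_sync(dataset: str) -> str:
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "sync", dataset],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

set_sync(DATASET, "disabled")    # benchmark mode: fast but unsafe
print(get_sync(DATASET))         # -> disabled
# set_sync(DATASET, "standard")  # put it back for real data
```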
Switches are actually a good idea. They allow more granular control, layer 2 or layer 3, and each installation has different requirements. Not to mention that making any changes without a switch brings longer change windows, rollbacks, and other problems as well. The right switch can also do routing to offload work from the main firewall or router - better speed and versatility in the end. Switches allow better scaling and future growth with minimal cost and planning as well. In a mission-critical network like 911, for example, you need something complex enough to be secure and work at scale, but simple enough to maintain and keep "eyes" on the complete end-to-end system.
LOL... Wish I had the money to spend on a rack or on the stuff you are getting... To just blow it on stuff I don't need.... Nice video... :) Had a good laugh as well (all positive tho)
"If you're still watching you're Infiniband"... I can't tell if that's a complement or not
😉
I have 2 of those MikroTik switches and an Arista with 48 100G QSFP28 ports, plus several Intel E810-CQDA2 cards.
General desktop machines will be PCIe lane constrained. I can eke out about 50G. You need something that has the full 16 lanes of gen4 PCIe to get the "love". Windows has a larger perf hit than Linux, and WSL2 blows all the goats.
100Gbit is a beautiful thing!
"That would be stupid, right" seems like forshadowing to me.
What's really interesting: will this MikroTik work under full load? We had a few MikroTik switches and had to wait two years before MikroTik made SwOS for them. On RouterOS they just didn't work under load.
I agree!
We know it's a PCIe bandwidth issue, but make sure you are also using iperf3 under WSL on Windows, since the Windows-compiled versions still go through a translation layer because of how they're compiled. @technotim just went over this in his latest UniFi video too.
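If you want to script that, here's a minimal sketch that drives the Linux-side iperf3 through WSL from Windows rather than the native Windows build, with parallel streams to get around single-stream limits. The server address is hypothetical:

```python
# Minimal sketch: run the WSL (Linux) iperf3 binary from Windows via
# the `wsl` launcher, instead of the Windows-compiled iperf3 build.
import subprocess

SERVER = "10.0.0.2"  # hypothetical iperf3 server address

result = subprocess.run(
    ["wsl", "iperf3",
     "-c", SERVER,   # client mode, connect to the server
     "-P", "8",      # 8 parallel streams to better fill the pipe
     "-t", "30"],    # run the test for 30 seconds
    capture_output=True, text=True,
)
print(result.stdout)
```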
100G network is one of those things that I feel is overkill for a homelab. 40G is ideal considering that you probably won't have media that goes that fast even over SMB connections.
Personally, I have a 1G connection that is more than adequate for what I have. I've thought about 10G, which I may do in the future, but I am not using the bandwidth I have at the moment.
If you want better speed than Samba, then maybe try NFS? Another option is iSCSI, but I'm not convinced it would outperform NFS... but hey, try and see.
I have to wonder if some of those GPUs we've been seeing with NVMe slots are a preview to seeing more types of IO bundled into the GPU package to take advantage of the over-allocation of PCIe lanes to the top x16 slot.
Yes, it's worth an all-flash server just to get that good RDMA throughput.
I'm disappointed you didn't cut to the new threadripper build after talking about it. 😂 Also yes all flash NAS is totally worth it. As long as you don't do it the dumb way mine is set up for now. 😅
I'm MAD and angry at the PCIe lane limitation. Before, we could have all the PCI goodness we wanted. Now, if you don't get a server-grade motherboard and processor, forget your cheap homelab built from your older gaming station, where you could just add an add-in card for that stuff you wanted to do.
One of the reasons why I am keeping an eye out for a decent deal on a used AMD EPYC server CPU paired with a used Supermicro motherboard and RAM. They're getting to be cheap on eBay.
the "problem" is the QSFP modules you are using. there are 2 kinds of 40gb modules. SR4 and BD (Bidi)... the 1st kind uses the 12 fiber connection 100gb fiber uses (MTP for multi pair). the BD one uses "nornal" 10g-20g fiber (2 pairs, normally LC terminated) that kind has 2 20gb channels. so the maximum per connection is 20gb or rather it can work at 20gb for 2 simulatenous connections totalling 40g.
those 100gb cards need server airflow, install a noctua on them. they wll die. i killed my 1st 40gb card like that.
The power supplies seem hot-swappable - are they? Cheap networking gear is notorious for not having hot-swappable power.
Any news on EPYC mainboards? There are interesting products like the Gigabyte ME33-AR0. The AMD 8004 series comes with 96 PCIe lanes...
"Hey babe, one hundred Gib a second." 😎
Slap!
"Ok? Maybe I should have said a thousand Gib???" 😏
Keep up the good work, Raid.
Can't wait to see your Threadripper update :p
I'm curious how you physically set up the RDMA mode. I mean, if you were passing through that switch, it would have failed. AFAIK, RDMA support requires all parts in the path - network cards, switches, routers, etc. - to be RDMA compatible and properly configured. The Mellanox cards and the switch you are using are the same as mine, and Mellanox uses RoCE for RDMA, which requires a bunch of features from the switch that MikroTik hasn't implemented on any of their products. Would love to see more details on that RDMA setup. Thanks!
It was direct from each machine. Didn’t go through the switch
Mikrotik just go "how many gigs can we ram in under $700" and then this was born
How about RDMA GPU? Sort of next level for “cloud” (read home lab) gaming.
It doesn't have to run RouterOS - you can reboot it into SwOS mode. Also try MTU 9000 and research jumbo frames.
Run the GPU on fewer PCIe lanes. Linus tested this - no performance hit on 4 lanes.
What about NFS instead of SMB? I've seen some videos recently saying NFS performance on Windows is great.
Use PCIe bifurcation (x8/x8), since your GPU doesn't use those 16 lanes (trust me, a 1060 had 3 fewer frames on 4 lanes vs 16 on PCIe gen 3 - google your card and see how many lanes it actually needs). I do the same: invent reasons or hide behind semi-legitimate ones, but the truth is I WANT A FAST SWITCH vs I need it.
Yeah, looked into this but my motherboard doesn't support it.
Pls don't take it the wrong way - I have a SOHO/home lab and I have 3 MikroTiks with fibre runs between rooms or DACs, and a 4th one ordered. 😂
@RaidOwl Can't you plug the GPU into another slot? Even x4 gen4 is probably OK, or x8 gen3.
@RaidOwl I'd be kinda surprised if it really doesn't. Usually it's automatic, without an option, if there are multiple greater-than-x4 slots on the board. If you plug ANYTHING into the second, it downgrades the first to x8. What's the model?
I just set up my 1 billion gig network, it's so dope. I can download the universe in 4 minutes.
That’s a lot of recipes Georgio
I've been wanting to get my hands on one of these switches just because 100Gbit sounds so insane
Not sure if it's just on my end but it seems like you need some sound dampeners. I can hear a bit of room reverb in your audio.
it keeps escalating .. you need more sponsors 😂
Good video buddy cheers on the next beer
Would be a cool setup for a Proxmox cluster.
MikroTik has been good to me on value for the price.
Time for a new workstation and an all-flash Windows server so!
Ok, SMB Direct is cool. My thoughts went from a) it would be cool if Unraid supported this, to b) there's probably no way for Unraid to support this lol
Well, I work in the broadcast industry - SMB 3.0 and beyond is for sure not limited to 10G by any means. It might be Samba or whatever TrueNAS is using that causes that, not the protocol.
Yeah, SMB is definitely capable. You should not be seeing a nearly 1/10th performance hit. There is something else wrong. RDMA is only going to work point-to-point in your setup since the Mikrotik has no support for it.
It's SFP28, not SFP+ ;)
Cool , when MS borks your system with an automatic update , it will occur that much faster !!!
Turn on Jumbo frames for better speeds
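For what it's worth, the jumbo-frame win is partly header overhead and mostly packet rate. A rough goodput calculation (standard Ethernet/IP/TCP header sizes, no options):

```python
# Why jumbo frames help: less header overhead per byte of payload.
# Rough goodput math for TCP over Ethernet (per frame: 20 IP + 20 TCP
# headers inside the MTU, plus 14 eth header, 4 FCS, 8 preamble, and
# 12 bytes of inter-frame gap on the wire).

LINE_GBPS = 100

def goodput(mtu: int) -> float:
    payload = mtu - 40                  # strip IP + TCP headers
    on_wire = mtu + 14 + 4 + 8 + 12     # full per-frame cost on the wire
    return LINE_GBPS * payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{goodput(mtu):.1f} Gbps max goodput")
# MTU 1500: ~94.9 Gbps; MTU 9000: ~99.1 Gbps -- plus far fewer
# packets per second for the CPU to chew through.
```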
Would NFS help with the performance issue?
Don’t think so
Is it worth it? Do you really need to ask the question? It's always worth it. I would be curious about the power consumption difference between 10, 25, 40, and 100, including the NIC and the switch.
Addiction to tech is not a joke people. Brett needs our compassion, not our derision.
I went way crazier when I got two Mellanox / Nvidia SN2410 switches… BUT I use GPU Direct at least.
Boy you WILD
@RaidOwl Can't wait for when the new 800G stuff gets tossed out at work… 48 months, I am already counting!
Although it's very cool, I would say that it is extremely wasteful. I would rather install large-capacity NVMe drives in all the clients and then allow some kind of capped-speed (like half a gig) sync between all of them. I think it is just a better way to do it. You get instant access with no network bottleneck on your local drive, while any updates you make get synced slowly over time. Centralised stuff is cool, but you will always get congestion. I think if I spent like $1.2k on SSDs between all systems, I would become victorious over user experience.
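One way to get that capped-speed sync, sketched with rsync's real `--bwlimit` flag; the paths and host are hypothetical:

```python
# Minimal sketch of the capped-speed sync idea above, using rsync's
# --bwlimit flag (value is in KB/s by default). Paths and host are
# hypothetical -- substitute your own machines, or run it on a timer.
import subprocess

SOURCE = "/data/projects/"          # hypothetical local copy
DEST = "editbox:/data/projects/"    # hypothetical remote client

subprocess.run([
    "rsync", "-a", "--partial",
    "--bwlimit=62500",   # 62500 KB/s ~= 500 Mbit/s, i.e. "half a gig"
    SOURCE, DEST,
], check=True)
```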
Joke's on you, I've only been a network engineer for 2 years and this is the way to go. Bigger number, better person.
Whew good to know!
And I'm struggling to max out my one gig connection...
Really need to upgrade my hard drives...
You should have used a 100G to 2x40G DAC to save ports. You do you, though. Just know that's an option for future expansion.
I like your style. Gonna need to subscribe for more content
Can you not put SwOS on them?
For those interested. To convert from bps to B/s, you divide the value by 8 (100 Gbps gives you a theoretical max speed of 12.5 GB/s). To convert from B/s to bps, you multiply by 8.
8:16 That would be 8 GB/s, which would be ideal for running four 10GbE connections, but you'd need at least x8 for 100GbE. Though you could also go for a PCIe 2.0 x16 slot, which has the same speed.
At 8:16 I mentioned it's x4 of gen3 speeds since the card is gen3. But yes, x8 of PCIe 4 would suffice.
@RaidOwl 8 GB/s is the max transfer speed for PCIe 3.0 x8 - x4 tops out around 4 GB/s.
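A quick worked version of the math in this thread, using the usual effective per-lane PCIe rates after encoding overhead (gen2 uses 8b/10b encoding, gen3/gen4 use 128b/130b):

```python
# Unit conversion plus per-slot PCIe bandwidth budgets, to check
# which slots can actually feed a 100GbE NIC.

def gbps_to_gbytes(gbps: float) -> float:
    """Convert Gbps to GB/s: divide by 8."""
    return gbps / 8                      # 100 Gbps -> 12.5 GB/s

# Effective GB/s per lane after encoding overhead.
GB_PER_LANE = {"gen2": 0.5, "gen3": 0.985, "gen4": 1.969}

def slot_gbytes(gen: str, lanes: int) -> float:
    return GB_PER_LANE[gen] * lanes

need = gbps_to_gbytes(100)               # 12.5 GB/s for 100GbE
for gen, lanes in [("gen3", 4), ("gen3", 8), ("gen3", 16),
                   ("gen4", 8), ("gen2", 16)]:
    have = slot_gbytes(gen, lanes)
    verdict = "ok" if have >= need else "too slow"
    print(f"{gen} x{lanes:<2}: {have:5.2f} GB/s ({verdict})")
# gen3 x4 (~3.9) and gen3 x8 / gen2 x16 (~7.9-8.0) fall short;
# gen3 x16 or gen4 x8 (~15.8) clear the 12.5 GB/s bar.
```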
Can you get full RDMA access with a ZFS setup? I guess if so you are limited by client speed? Could you use a RAM disk on the client side? Or read from an NVMe RAID 0 array? ...hmmm, RDMA wants Windows on both ends? Ok... hmm. So is the transfer direct from memory to memory? So as long as your memory pool is large enough?
I have an array of 10x NVMe disks in a RAIDZ2 for times like these. lol
The biggest issue with 10+GbE is that without RDMA the CPU has to process packets, and unfortunately these Mikrotik switches don't support DCE or RoCE at all... With that you would get vastly superior performance with SMB Direct.
Correct, but that FS switch does 😉
I think, to have proper RDMA through RoCE, you need a RoCE-compatible switch.
With MikroTik you should probably go with iWARP, but not every network card supports iWARP.
Right. I just did a direct connection for testing.
At the same time, it seems most RDMA/RoCE NICs are only compatible with proprietary OSes and lack open-source fabric driver support. That proprietary shit is expensive as hell.
Love videos like this, but atm just on 1 gig... planning to swap to 2.5 gig "soon"™
2.5G will be nice 👍🏼
10Gb second-hand gear is real cheap. SFP+ and Ethernet end up similar in price, but 10GBASE-T Ethernet is more flexible. The switch costs more but the cables cost less.
"my homelab has 100 Gig networking now"... "oh gawd!"
When you said it would be stupid to spend about $2,000 just to run faster network tests, I thought you were about to cut to you buying the gear. 🤣🤣🤣
Couldn't do it twice in one video haha
@RaidOwl You're right. Save it for the next video. 😉
Can someone explain to me why these are so cheap?
Awesome video. I really like how chasing performance makes us learn new things. One question: while the switch is 100Gb, how does the router impact the performance of the network? Aren't packets sent to the router and back, or are they communicated directly between each other through the switch itself? If directly, then how are packets filtered or firewall rules applied? Thanks.
As long as they're on the same VLAN, the switch forwards by MAC address, so traffic won't need to go back to the router.
8:42 you've been WHAT
ONE MORE SWITCH!! NEXT ONE IS THE BEST ONE!!
Top!
bro, please do a video where your wife comes on and judges your purchases - explain to her why they are required and "worth" the money.
😅😅😅
Going to be a no on the Windows SMB server. All the services I run run on Linux, so a Windows server would be strictly fast storage for my 1 other Windows machine, soooo isn't that just a glorified DAS at that point?
Linus already did this :)
Someone already commented
nothing wrong about dumb upgrades. it comes with the territory.
It's got quint power, because reasons. MikroTik be like: I know we have dual power already, but I can add 5-way power for like $1.30 extra on the BOM cost, so they do it... Never change, you crazy Latvians. Powering your core switch off PoE... sure, why not. Also, I got a stash of 40-gig cards for like $7 each myself - was shopping for 10G cards, but 40G was way cheaper so that's what I gots.
threadripper next video 🤔
Possibly
Since @JeffGeerling is pushing Threadripper...shouldn't he buy it for you?
Absolutely
Having this video up and eBay open at the same time is dangerous.
You’ve activated my trap card
Me waiting for someone to say "use ChoEazyCopy"
NFS? iSCSI?
Hmm, it looks like it's time to upgrade your PC.
Don’t tempt me…
sometimes you do it for bragging rights......
It would be stupid to build a new PC - so?
I’ve done plenty of stupid things don’t worry
So why not do one more!
If you indeed see 100GbE speeds then fine, but I would steer clear of MikroTik. Their track record on the software side is tricky at best and can render the result meaningless. Whether one even needs 40GbE is tricky, but if one can use it then it is good to have.
How much heat is that generating? I ran MikroTik many years ago. Also, I don't understand home labs - are people self-taught and not in the business? I would not want any of this in my house. It's too new and uses too much overhead for just bragging rights: overhead on PCs and the network, plus the costs of running it, including heat. If everyone loves this home lab stuff, go get a job in the sector. Less is more in every case, every time, especially at home. Also, your bus won't support any of this. Unless you are running enterprise-hardware computers, you are not going to see this. 10G is more than enough for home file transfers, and most people have an ISP with a gig upload at max.
Turns out you don't really need to attach every NVMe disk to your CPU, or even waste x4 lanes on x2 devices - there are x2 SFF-8748 SAS HBAs in many forms, and 10GbE only needs gen3 x2 at most, so figure that. Leave the x16 lanes for a twin-port 100GbE card; swapping the GPU to x4 or x8 is enough for a server appliance.
Consumer platforms are not science machines, so the PCIe lanes attached to the CPU are at most 20-ish usable. You've got plenty of x1 slots - at least two of them can take an Optane M10 64/32GB as an IOPS cache; those slots only bottleneck bandwidth, not IOPS or response times.
looking forward to your review of the threadripper in a week ;)
Just came here to say that ...
Threadripper owner here: you still need RDMA to reach any meaningful file transfer speeds. Network packets are typically 1492 to 1500 bytes, or 9000-ish if using jumbo frames. That's roughly 8.3 million packets per second at 100GbE line rate, or about 1.4 million with jumbo frames, which is a massive amount of overhead for the CPU. RDMA instead takes megabyte or even gigabyte-sized chunks of data and "packetizes" them right on the NIC instead of your CPU, and it does so at line speed with dedicated silicon. Well, assuming your RAM and PCIe bus can keep up.
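The back-of-envelope packet-rate math, for anyone who wants to check it:

```python
# How many packets per second the CPU must handle at 100GbE line rate,
# for standard and jumbo MTUs.

LINE_RATE = 100e9  # bits per second

for mtu_bytes in (1500, 9000):
    pps = LINE_RATE / (mtu_bytes * 8)   # bits per packet
    print(f"MTU {mtu_bytes:>4}: {pps / 1e6:5.2f} million packets/s")
# MTU 1500: ~8.33M pps; MTU 9000: ~1.39M pps.
# (Ethernet framing overhead pushes the real numbers slightly lower.)
```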
This must fall under the category: do I need it, hell no, do I want it, hell yeah 😂
That’s 99% of my life
Yup. How I end up with most of my stuff.
This is @RaidOwl's YT channel slogan
the "see i fixed the saggy servers!!!" gesture was great lol
By adding in a 2x4 or something like that... like I did at my home lab. It worked.
@dagamore Hey, I've done a shelf, and when the shelf didn't work I used zip ties. It ain't stupid if it works - only if it fails.
iSCSI should be the best option to improve speed for editing. And it's supported by both TrueNAS and Windows.
That would be interesting.
iSCSI would be the most performant for him, but the downside is fixed space allocation for the share since it's not thin provisioned.
Alternatively, he could just migrate from SMB to NFS and see a nice performance bump with a very similar configuration.
Check out the multithreaded iperf3.
Windows (eww) is never worth it..
I wish MikroTik made an 8-port 100 gig switch.
But overall, a really good 4-port switch.
They will eventually. I have several of their switches and they're great!!
I had a similar upgrade path, where I started with 3 servers connected via 3 dual-port ConnectX-3 cards, but wanted to connect some other stuff, so I found a Mellanox SX-6036 40GbE switch on eBay for about $150 and I've been thrilled with that thing. It's not "sit next to it" quiet, but it's not even close to being the loudest thing in my rack. If you're not averse to used equipment, it's a great option.
I just never want to fool with MikroTik's settings just to try to hit line speed. I'd rather spend $5k than the ~$600 this costs. Give me wire speed no matter the config used.
I'm still trying to get to 10G on my servers and you're over here with 100G lol. Great video and breakdown of what was involved.