ONE HUNDRED GIGABIT - MikroTik CRS504-4XQ-I9

  • Published: Oct 20, 2024

Comments • 292

  • @CraftComputing
    @CraftComputing  1 year ago +7

    To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/CraftComputing/ - plus, the first 200 of you will get 20% off Brilliant's annual premium subscription!

    • @rushunt2131
      @rushunt2131 1 year ago

      Your shirt is brilliant!

    • @amateurwizard
      @amateurwizard 1 year ago

      You pronounced it correctly (not that anyone minds if you didn't, just nice to see)

    • @Nobe_Oddy
      @Nobe_Oddy 1 year ago

      why does your intro have the sound of water pouring into a glass??? you would think you'd have at least used a carbonated beverage pouring sound for it... right??? You have the MIC... you have the BEVERAGE.... now fix it!!! (and lemme have the beverage when you're done plz?? ... that's the whole reason I wrote this... I wuntz bierz plz :P) :)

    • @zachsoanes6417
      @zachsoanes6417 1 year ago +4

      Forgot to put the command in the video description - as you said at 15:45 ;)

  • @marcogenovesi8570
    @marcogenovesi8570 1 year ago +136

    Those mad lads at MikroTik sent not just one but a couple of their flagship 100Gb switches plus accessories, no questions asked. Jeff truly wields the power of the gods

  • @postnick
    @postnick 1 year ago +62

    I still cannot justify the 2x 2.5 gigabit switches I need in my basement and office, and this guy is doing 100 gig! Good for you, great content! Keep it coming!!!

    • @ewenchan1239
      @ewenchan1239 1 year ago +5

      I have a 36-port 100 Gbps Infiniband switch in my basement.

    • @bojinglebells
      @bojinglebells 1 year ago +19

      Yeah, it really is frustrating that consumer networking has been dragging ass for the better part of TWO decades now. I swear they've introduced 2.5Gbps and 5Gbps just to nickel and dime us instead of going to 10, where they can keep overcharging businesses

    • @wiziek
      @wiziek 1 year ago +2

      @@ewenchan1239 ah stop trolling

    • @wiziek
      @wiziek 1 year ago +8

      @@bojinglebells It hasn't; you have no idea what you're talking about. Most consumers don't even use 1Gbps, looking at how widespread wifi is. 2.5 and 5Gbps were introduced for enterprise, to reuse Cat5e cables.
      Anyway, there are already 400Gb devices available, or even 800G, of course not for consumers or the average enterprise...

    • @bojinglebells
      @bojinglebells 1 year ago

      @@wiziek They don't put 2.5G on consumer products so that enterprise can use them. But sure, keep being a know-it-all ass.

  • @Tuetuopay
    @Tuetuopay 1 year ago +40

    iperf3 tip: it natively supports multiple parallel streams with the -P flag, no need for multiple instances ;) great upgrade though, it gave me the itch to dust off my own 100Gbps cards I have lying around
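
    For reference, a minimal sketch of that invocation (the address is a placeholder; note that iperf3 only runs the -P streams on multiple threads from version 3.16 onward, so older builds may still benefit from multiple instances):

        # server side
        iperf3 -s
        # client side: 8 parallel TCP streams for 30 seconds
        iperf3 -c 192.0.2.10 -P 8 -t 30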

  • @BeeWhere
    @BeeWhere 1 year ago +8

    This is the level of absurdity that I love from the channel.
    I was finally able to upgrade my NAS and main PC to 10 gig, because I needed to upgrade my NAS and didn't want to deal with migrating 30TB of data over a 1G connection.

  • @MikeHarris1984
    @MikeHarris1984 1 year ago +6

    Your shirt... That little saying is drilled deep into my head after making thousands, tens of thousands, hundreds of thousands of patch cables over the last 20+ years.

  • @paulbrooks4395
    @paulbrooks4395 1 year ago +54

    I'd like to see RDMA in Linux and Windows through SMB Direct, as well as iSCSI and NFS. RDMA should remove all CPU bottlenecks since the transfers will no longer use traditional file stacks. Make sure when doing any tests with iSCSI to turn off synchronous writes with TrueNAS; it will allow better performance for tests, although it shouldn't matter once you get NVMe.
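
    For instance, a minimal sketch of toggling that from the TrueNAS shell (the dataset name tank/iscsi-vm is illustrative):

        # disable synchronous writes on the zvol backing the iSCSI extent
        zfs set sync=disabled tank/iscsi-vm
        # restore the default after testing
        zfs set sync=standard tank/iscsi-vm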

    • @ivanmaglica264
      @ivanmaglica264 1 year ago +2

      Well, not quite correct... RDMA itself removes the TCP stack and goes RAM to RAM via InfiniBand, like DMA does from, let's say, a sound card to RAM, but between machines. Filesystems are still involved, especially in userland program space, where fopen, read, write and fclose operations are used.

    • @AegisHyperon
      @AegisHyperon 1 year ago +2

      It's super driver dependent and the drivers suck ass. Tried to implement it for a fileserver project a while back but couldn't get it to not drop packets.

    • @rayl6599
      @rayl6599 1 year ago +1

      NVMe over RoCEv2 requires Converged Enhanced Ethernet support to avoid flow-control issues, else reliability problems may result. I do not think these are CEE capable.

  • @ZaPirate
    @ZaPirate 1 year ago +104

    Great video. With 100Gb you could pretty much run all the VM storage over iSCSI. Would make for an interesting project.

    • @llortaton2834
      @llortaton2834 1 year ago +7

      That's what I did, but with 40GbE on my Proxmox

    • @marcogenovesi8570
      @marcogenovesi8570 1 year ago +7

      Most of the clients I work for have 10Gbit (either Fibre Channel or Ethernet) for VM storage.

    • @ewenchan1239
      @ewenchan1239 1 year ago +7

      "over iscsi"
      or NFS

    • @RowanHawkins
      @RowanHawkins 1 year ago +3

      The way you are installing that is making my DC senses tingle. MikroTik designed this to be a back-of-rack device. I bet the fans are blowing the wrong way for your configuration. That's why the power and QSFP are on the same side. Edge switches have power on one side and ports on the opposite side because you generally link them to patch ports for your end devices off the front.
      Also, multimode (OM4) is fine and spec'd up to 100G over 125 m (400 ft). The problem, as you saw, is that QSFP28s in that configuration want 8- or more likely 12-strand MPO, and those are not cheap.
      I really wish you had been clearer that the switch to single-mode infrastructure wasn't a limitation of the OM4 itself but a budget limitation of buying the hardware to allow OM4 to operate at 100Gb speeds. OM5 has the same issue. However, the old datacenter infrastructure that I supported ran either 24, 48 or 96 strands to each cabinet using 1x12 MPO, so even with the cost savings of SM, which is a relatively recent thing, it was still cheaper to buy the MM-compatible 100G SR QSFP28 modules than it would be to replace the cable plant.

  • @RobertRidleyE
    @RobertRidleyE 1 year ago +5

    It's great to see a good use of 100gig. I upgraded to 10gig with the crs326 and crs317 switches a few years back and think I am good for the foreseeable future.

  • @michaelmcinerney2853
    @michaelmcinerney2853 1 year ago +4

    So it's 5 months later, and I have only now picked my jaw back up off the floor after hearing you say you got those 16 Intel 100gig transceivers for 5 bucks each.

    • @CraftComputing
      @CraftComputing  1 year ago +2

      If it makes you feel any better, I still can't believe it myself.

  • @Lisa_Minci96
    @Lisa_Minci96 1 year ago +5

    Having a personal, local fiber network in your home just sounds so cool to me lol

  • @dezznuzzinyomouth2543
    @dezznuzzinyomouth2543 1 year ago +2

    Damn, 100gig... I would only need that between my two servers... And here I was only considering a 25G upgrade...
    Thanks a lot, Jeff... My bank account is getting drained

  • @jannikmeissner
    @jannikmeissner 1 year ago +8

    On systems with only one x16 slot, I would recommend running your 100G NIC in the x16 and your GPU in the x8 or x4 slot; the GPU might only lose a few per cent of performance, but your 100G NIC will thank you.
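
    Rough numbers back this up, assuming a PCIe 3.0 platform (~0.985 GB/s usable per lane):

        x8  :  8 x 0.985 GB/s ≈  7.9 GB/s ≈  63 Gbps  -> below 100 Gbps line rate
        x16 : 16 x 0.985 GB/s ≈ 15.8 GB/s ≈ 126 Gbps  -> headroom for 100 Gbps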

  • @gearboxworks
    @gearboxworks 1 year ago +3

    When those lights first came on, Jeff was grinning like a kid who just had the best Xmas of his life! 😆

    • @CraftComputing
      @CraftComputing  1 year ago +3

      It was like getting my Super Nintendo all over again 😁

    • @gearboxworks
      @gearboxworks 1 year ago

      @@CraftComputing - There is an unmistakable look of pure joy that can't be faked. You had that look just then. 😆

  • @MrBreadoflife
    @MrBreadoflife 1 year ago +5

    I appreciate the shirt, after doing thousands of cable ends in my life thus far.

  • @Azlehria
    @Azlehria 1 year ago +4

    It took a long time for me to finally try out some MikroTik hardware. It was their Wireless Wire kit, which is ironic given my long-standing distaste for infrastructure wireless, but genuinely the best choice.
    Other than some configuration bobbles - they're the link in the middle of a double-NAT setup - it's been _very_ nice. Incredibly reliable and simple to install. I know double-NAT is bad, but I haven't been able to successfully argue for taking over the ISP-provided external router yet.

  • @ATGEnki
    @ATGEnki 1 year ago +4

    I bought a 2u wall mount rack and bolted it under my desk for my PDU. Works perfectly and keeps everything out of the way. Also installed some under desk pockets to hold power bricks and the like.

  • @javiej
    @javiej 1 year ago +19

    For file transfers and real applications (such as streaming) you would need to set up NFS over RDMA or Samba over RDMA. I don't think the bottleneck is in the SSD raid. Standard Ethernet works fine for synthetic benchmarks using specialized applications (like iperf), but for getting similar speeds with network filesystems and non-specialized apps you really need RDMA.
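
    A minimal sketch of the client side of NFS over RDMA on Linux (server name and paths are placeholders; the server needs the svcrdma transport enabled as well):

        # load the client-side RDMA transport
        modprobe xprtrdma
        # 20049 is the standard port for NFS over RDMA
        mount -t nfs -o rdma,port=20049 server:/export /mnt/share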

    • @jyvben1520
      @jyvben1520 1 year ago +4

      Remote direct memory access

  • @PacketWrangler
    @PacketWrangler 1 year ago +4

    It's crazy how far transfer speeds have gone. It was only like 15 years ago we were splitting DS0's off of DS1's to backhaul voice on cell sites.

  • @RubyRoks
    @RubyRoks 1 year ago +16

    The fact that all of this is as (relatively) inexpensive as it is is freaking crazy to me

  • @breadworkshop
    @breadworkshop 1 year ago +2

    Love to see the Rack Studs, they're so great.

  • @tropmonky
    @tropmonky 1 year ago +1

    My goodness. I have a hard enough time using all of my 10GbE setup, even between servers doing large VM backups! hahahaha. Good video, looking forward to more videos on it.

  • @davidbango7404
    @davidbango7404 1 year ago

    Good job wearing green! This comes from a professional brewer in Ireland who was raised in the PACNW! Happy St. Patrick's Day. Let me know if you ever want to homebrew. Super fast switch; I just ordered the 570/80 you just presented.

  • @Superminaren
    @Superminaren 1 year ago +2

    Awesome video, I have a question regarding safety though:
    Could you please be more clear on the dangers of leaving SFP/QSFP lasers open? It can permanently blind someone, and proper precaution should be taken when handling lasers.

  • @josemachado7830
    @josemachado7830 1 year ago +4

    Oh, that t-shirt. I want one!

    • @Nealio6s
      @Nealio6s 1 year ago

      I was thinking the same!

    • @CraftComputing
      @CraftComputing  1 year ago +5

      vkc.sh/product-tag/t568b-cheat-sheet/

  • @marc3793
    @marc3793 1 year ago +6

    One thing I'd really like to see is inter-VLAN routing speeds, hardware offloading, etc.
    There seems to be very little information around correctly setting this up without just killing your switch's/router's CPU.
    Usually if you're geeky enough to have 10 or 100Gb networking then you're going to have VLANs. :-)
    It would help me get on the 10/100Gb train!
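
    On RouterOS v7 switches whose switch chip can route, inter-VLAN routing can be pushed into hardware with L3 HW offloading; a minimal sketch (check MikroTik's docs for your model's support and limitations):

        /interface/ethernet/switch set 0 l3-hw-offloading=yes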

    • @stephenp4440
      @stephenp4440 1 year ago +1

      I was going to ask how he set up inter-VLAN routing with that Ubiquiti UDM-Pro. I went to 25 Gbps and I had to collapse my VLANs to put all of the 25 Gbps clients in a single VLAN if I still wanted to use the UDM-Pro. An alternative might be to use the MikroTik as the inter-VLAN router, but I don't know what the speeds are. The UDM-Pro just couldn't cut it for me.

  • @ABRetroCollections
    @ABRetroCollections 1 year ago +2

    The one thing I like about Mikrotik is I don't have to fight with GBIC and SFP compatibility! Intel, 10Gtek, Nokia (FTTH SFP). It doesn't care and it just works.

  • @mikesbark6626
    @mikesbark6626 1 year ago +1

    Oh my gosh, love the t-shirt!!! When I first noticed it I just laughed and laughed. Mostly only network geeks will get it. Well done!

    • @eivinha
      @eivinha 1 year ago

      Quite outdated for *this* video though 😂

  • @ZaPirate
    @ZaPirate 1 year ago +9

    never clicked so fast on a video

  • @dctaken
    @dctaken 1 year ago

    Took me a minute to realize what that shirt means. I'm drinking and eating as I'm watching this and thinking about that shirt. I like it.

  • @jonathanbuzzard1376
    @jonathanbuzzard1376 1 year ago

    This comes up in my YouTube feed the day two new 32x200Gbps switches were delivered at work 😂

  • @charlesdesmond1
    @charlesdesmond1 1 year ago

    Jeff,
    In this video, you were just like a kid at Christmas, so Merry Christmas! Enjoy your massive bandwidth.

  • @djohnsto2
    @djohnsto2 1 year ago +3

    I get about the same, ~ 35 Gbps per stream with iperf3, and about 16 Gbps with plain SMB. Using RDMA (SMB-Direct) I can achieve 40 Gbps file copies, but only in one direction. (Uploading from W11Pro4WS to Svr2022) Tried enabling PFC+DCB on my equipment, speed went down. More troubleshooting needed.

  • @NateWheeler1
    @NateWheeler1 1 year ago

    Awesome video, Jeff. LOVE Mikrotik!

  • @fourthplanet
    @fourthplanet 10 months ago

    That is fast. I know less than Linus about the Linux universe, but it is always fun to see the hardware. Also glad you mentioned the tree grinder across the street. For most of the video I was sure that you had a noisy fan near a microphone. Cheers

  • @Davidx_117
    @Davidx_117 1 year ago +7

    Those bottom power plugs shown at 9:59 & 13:39 aren't in all the way, which is a fire hazard since it could lead to arcing, so definitely push those in all the way. And you should get that dust off too so it doesn't get into the sockets.
    Anyway, great vid, it's always fun to sit back and watch your videos

    • @CraftComputing
      @CraftComputing  1 year ago +5

      I bumped them while wiggling behind the rack. I did fix them.

    • @Davidx_117
      @Davidx_117 1 year ago

      @@CraftComputing Good to hear 👍

  • @KaizenHydraxis
    @KaizenHydraxis 1 year ago +3

    Curious what distance those single-mode optics are rated for? Did you put attenuators on the (preferably) receive ends of the links? You are going to shorten the life of your optics if you are blasting 10km optic power levels over 3-meter cables.

  • @l4rzzz
    @l4rzzz 1 year ago +1

    You made me an alcoholic. Always drinking beer along with your episodes :)!

  • @Itay1787
    @Itay1787 1 year ago +1

    I have been waiting for this video so much!! And it's 30 minutes! fun!
    Keep creating amazing content!

  • @robertboskind
    @robertboskind 1 year ago +2

    I'm surprised how long it took me to understand that shirt... I always think OrangeWhite-Orange-GreenWhite-Blue.... in my head

  • @ErikS-
    @ErikS- 1 year ago +1

    I grew up having a BBS on 9600 baud (9.6 kbps), and the move to 19,200 baud was a giant leap. That was still all over a circuit-switched telephone network, i.e. not IP networks yet.
    The speeds you reach in this video are like 10 million times higher... Talk about progress...
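
    The arithmetic roughly checks out: 100 Gbps / 9,600 bps = 100,000,000,000 / 9,600 ≈ 10.4 million.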

  • @ericblenner-hassett3945
    @ericblenner-hassett3945 1 year ago +2

    I do not have the kind of speed requirements, files, workloads, etc.; that said, it is definitely drool-worthy. I agree on the 'typical' workload usage, could do with that kind of speed and storage for my Steam library, and wonder what kind of load times you could get using a net-box on that network.

  • @AmaraTheBarbarian
    @AmaraTheBarbarian 1 year ago +2

    When you said 100 gigabit full duplex it gave me a full duplex. I've been considering a network upgrade for a bit and I'd very much like to get 10g links in place, but 100g is almost an incomprehensible number.

    • @CraftComputing
      @CraftComputing  1 year ago +3

      If your full duplex lasts more than 4 hours, make sure to call your doctor.

  • @NYCMesh
    @NYCMesh 1 year ago +2

    Our MikroTik gear is cheap but unfortunately buggy. Their OSPF implementation has a memory leak. The CCR2004 has a few problems that severely limit bandwidth. Also, ROS7 still has some blocker bugs for us.

  • @SB-qm5wg
    @SB-qm5wg 1 year ago

    I'm very happy with my MikroTik gear.

  • @chucksw1
    @chucksw1 1 year ago +2

    Nice network, good job!

    • @CraftComputing
      @CraftComputing  1 year ago +2

      vkc.sh/product-tag/t568b-cheat-sheet/

    • @nticompass
      @nticompass 1 year ago

      Looks to be from the YouTube channel "Veronica Explains" 🙂

  • @3k3k3
    @3k3k3 1 year ago +1

    Looking forward to the TrueNAS testing with cache and all that 🙂

  • @VeronicaExplains
    @VeronicaExplains 1 year ago +2

    I could run so many guestbooks with hundred gigabit networking.

  • @MrFoof82
    @MrFoof82 1 year ago +1

    If you want to push the network, storage, and CPU a bit, database ETL (extract, transform and load) will be a good general stress test.
    It's not only whether you are capable of saturating the network link, but for how long, and for how much data that has to be processed, structured, and stored in a database that can then be queried quickly. I'm not sure what's out there for "canned" large-footprint ETL benchmarks though.

  • @lightfoot256
    @lightfoot256 1 year ago +1

    Was waiting for someone to review this switch in the wild. Interesting regarding the CPU bottleneck. Assuming there's a hardware DMA workaround that doesn't involve the CPU. We're getting close to RAM speeds, let alone storage.

  • @ProjectSmithTech
    @ProjectSmithTech 1 year ago

    Great Scott! That is some seriously impressive gear, thanks for sharing.

  • @ExpressITTechTips
    @ExpressITTechTips 1 year ago +1

    Imagine Mikrotik being nice enough to send you two of these - we can but dream. Amazing stuff

  • @davidmcken
    @davidmcken 1 year ago +1

    What MTU do you run? Jumbo frames can help by reducing the packets-per-second rate, which reduces the number of interrupts the NICs have to handle and can have beneficial side effects, like limiting the effect of single-threaded performance on the overall benchmark results.
    As for testing, I would want to know the CPU usage of the 'Tiks while running these tests; most likely the switch chip will handle the entire data path, but there can be exceptions. Can you also comment on what type of SFPs you got for that price? I assume nothing past 10km stuff (and normal 1310nm). I am happy to hear that the Intel SFPs work in the Mellanox cards, as past experience with Intel+SFPs is that they don't work with anything but themselves (the 'Tiks accept pretty much anything). We have been heavily using Broadcom cards as a result, unless the setup specifically calls for Intel (usually for their offload functionality).
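
    A minimal sketch of enabling jumbo frames on both ends and watching the switch CPU (interface and port names are placeholders; the switch's L2 MTU may need raising too):

        # Linux host
        ip link set dev enp1s0 mtu 9000
        # RouterOS port
        /interface/ethernet set qsfp28-1-1 mtu=9000
        # per-process CPU usage on the 'Tik during a test
        /tool profile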

  • @mallon04008
    @mallon04008 1 year ago +2

    I don't know if I've seen Jeff this giddy before!

    • @CraftComputing
      @CraftComputing  1 year ago +3

      I'm a simple man. I see 100Gb lights, I smile.

  • @MikeKirkpatrick
    @MikeKirkpatrick 1 year ago +2

    I couldn't help but spit my coffee out when you mentioned how much you paid for those optics....

  • @Ghostdance86
    @Ghostdance86 1 year ago +1

    Great video! At this point, do you really even need to copy back the footage to edit it, or can you do that directly on the remote drives (using copy-on-write so you don't overwrite the originals or have to create a copy; deduplication might net you some interesting savings there, too)? I have a funny feeling you might run into issues and bottlenecks with SMB, the File Explorer equivalent or similar. But still, that would be a use case I'd like to see.
    Maybe you could also try playing some disk-intensive games directly from remote storage and seeing if they remain playable? Just a "totally overkill hardware for gaming" kind of idea.
    And thanks for the idea of running single-mode over multimode if you want "cheap" future upgradability. I'd like to run some fiber lines around the house for the connection between the desktop and the NAS. I'm looking at 10Gbps right now, but it would suck to be stuck there in the future.

  • @osaether
    @osaether 1 year ago +2

    Great video as always!
    But why, MikroTik, did you put the AC power inlets on the same side as the network ports?

    • @marcogenovesi8570
      @marcogenovesi8570 1 year ago +2

      A lot of networking rack enclosures don't have access in the rear, so hot-swapping the PSUs would be impossible. This switch is trying to target the broadest possible audience

    • @enthuscimandiri1640
      @enthuscimandiri1640 1 year ago

      In some racks you cannot access the back of the equipment, like the ones telecom operators use in a BTS

    • @SomeMorganSomewhere
      @SomeMorganSomewhere 1 year ago

      @@marcogenovesi8570 this ^^^ a lot of places you simply can't get to the back of the gear (without disturbing a bunch of other stuff which would defeat the purpose of having the hot swap power supplies)

  • @Jerrec
    @Jerrec 11 months ago

    I also had the same issue with Mellanox ConnectX-2 and -4 cards on some HPE workstations.

  • @CoryMT
    @CoryMT 1 year ago +2

    OK, this is probably dumb, but I'm curious if there is any practical use for a RAM drive on your server shared over the 100 Gbps network.
    I'm also curious how a 100 Gbps connection compares to 10 Gbps in latency.
    Also, regarding your future plans to use an NVMe pool, I've heard that ZFS actively hinders NVMe performance (according to a presentation by Allan Jude from last year), so a different file system for comparison may be interesting.
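
    For the RAM drive part, a minimal sketch on Linux (size and mount point are arbitrary); the directory could then be exported over NFS or Samba like any other:

        # create a 16 GiB RAM-backed filesystem
        mount -t tmpfs -o size=16g tmpfs /mnt/ramdisk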

  • @WillFuI
    @WillFuI 1 year ago +1

    I would like to see the peak of cloud gaming

  • @-ColorMehJewish-
    @-ColorMehJewish- 1 year ago

    Damn --- I have not been this jealous in a while lol.
    That's sick tho
    Your setup gives me something to aim towards... maybe one day

  • @mikejakubik
    @mikejakubik 1 year ago +1

    To avoid the scheduler on FreeBSD bouncing the process around on different cores, use the cpuset command with iperf to lock the process to a specific core. I was able to achieve much better rates when benchmarking like this.
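
    A minimal sketch of that on FreeBSD (CPU numbers and address are examples):

        # pin the iperf3 server to CPU 2
        cpuset -l 2 iperf3 -s
        # pin the client to CPU 3
        cpuset -l 3 iperf3 -c 192.0.2.10 -t 30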

  • @brandishwar
    @brandishwar 1 year ago +4

    MikroTik hasn't touched SwitchOS for two years, so no surprise they haven't ported it over to the faster-than-10Gb switches. The latest version is 2.13. And their site is saying the CRS504 is RouterOS v7 only, so they've probably abandoned SwitchOS.
    Given my experience trying to use RouterOS as well, I can say that performance will fall through the floor if you try to actually use that as a router. I had that problem with the aforementioned CRS317, which is why I have an OPNsense router instead.

  • @andyburns
    @andyburns 1 year ago +1

    100Gb is very cool for most businesses, let alone home, but that mains lead is about to pull out of the wall. Can you get 90° NEMA plugs so the strain is downwards?

  • @kevinkrau9876
    @kevinkrau9876 1 year ago +1

    I would love to see an iSCSI network boot comparison between a SATA SSD and NVMe. In addition to that, mount a game drive for Steam and see what kind of CPU usage is generated through the streamed block storage with that much bandwidth

  • @sagejpc1175
    @sagejpc1175 1 year ago +1

    That is one snazzy shirt you got, Jeff!

  • @Clarence-Homelab
    @Clarence-Homelab 1 year ago +2

    I'm a sucker for fast networking ^^ Awesome video, Jeff.
    Why would RAIDZ-1 be preferable to a striped mirror (RAID10) configuration in your use case, other than the fact that you have more storage space?
    Wouldn't RAID10 be better for write speeds and potential rebuilding of the pool in a worst-case scenario?
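
    For comparison, a minimal sketch of the two four-drive layouts (device names are placeholders; by-id paths are preferable in practice):

        # RAIDZ1: capacity of three drives, IOPS of one vdev
        zpool create tank raidz1 nvme0n1 nvme1n1 nvme2n1 nvme3n1
        # striped mirrors (RAID10): capacity of two drives, IOPS of two vdevs, faster resilver
        zpool create tank mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1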

  • @alexzaslavskis4623
    @alexzaslavskis4623 1 year ago

    It's really amazing... really proud of the Latvian guys... )) MikroTik, well done )))

  • @kenzieduckmoo
    @kenzieduckmoo 1 year ago +2

    At 100gig, I think that's when you start to think about using DPUs in your end systems, not just NICs

  • @lugaidster
    @lugaidster 1 year ago

    I got a Celestica DX010 and can't even get to 25. Still love the overkill. I went with bcachefs for testing my storage in a tiered configuration, since I don't have all that many NVMe drives to put my data just there.

  • @lemonbrothers3462
    @lemonbrothers3462 1 year ago +2

    I think the elephant in the room is how the hell you got those optics for $5 each

  • @t3chieXandeum
    @t3chieXandeum 1 year ago +1

    Is there a reason you aren't using the -P (parallel) flag on iperf3? -P 30 would run 30 parallel streams... Not sure if that changes your single thread testing though... Great video, love watching this stuff!

  • @novellahub
    @novellahub 1 year ago

    Looks like it is Christmas at Craft Computing!

  • @userperson5259
    @userperson5259 1 year ago

    And I just finally upgraded my workstation from an AMD Phenom II X6 1100T to a Haswell i7. (Yes, Haswell.) My IP camera network is Fast Ethernet based (10/100) and has no bottlenecks. Man, I wish I could justify upgrading. Oh well, that's what makes something like this so fun to watch. I'm not a huge fan of MikroTik/RouterOS. Much prefer pfSense.

  • @seethruhead7119
    @seethruhead7119 1 year ago +1

    I really want to see how you maximize the network.
    I've been meaning to pick up the same router, but I'm not sure how I should be setting up NVMe drives to maximize it.
    Should I be using an x16 PCIe 4.0 slot with a 4x4x4x4 PCIe bifurcation card? Striped + mirrored?
    What about SATA SSDs, how fast can those be made in ZFS?
    What about tiered storage with an HDD, SSD, and NVMe pool, with the stuff you're actively working on being moved to the NVMe pool?

  • @auroran0
    @auroran0 1 year ago

    Putting a like on this just for the jazz montage.

  • @ychto
    @ychto 1 year ago +9

    What, no 400Gbps?

  • @Yuriel1981
    @Yuriel1981 1 year ago

    Recent convo with myself: I'm thinking of upgrading the house to a 10Gb network infrastructure.
    Me: Cool, you're gonna need a new switch and NIC.
    Also me: Yeah, I can get a 2x10, 4x2.5 switch and a 10Gb RJ45 NIC, plus cable and supplies to do the house runs, for like $500
    Me: Wow, cool, do you work an IT or tech job or something?
    Also me: I'm a welder
    Me:........
    Also me: I have a problem.

  • @RemyL75
    @RemyL75 1 year ago +4

    FINALLY!!!!!!

  • @CptPatch
    @CptPatch 1 year ago +7

    I love the t568b shirt and the Boimler shirt and I'm totally jealous of your new network setup. I can barely saturate the 10g on my home network, but my homelab storage server is a whole lot smaller than your monster.

    • @Hitek146
      @Hitek146 1 year ago +1

      But don't most people say "orange-white" rather than "white-orange", etc???? Because it's orange insulation with a white stripe on it?

    • @VeronicaExplains
      @VeronicaExplains 1 year ago +1

      @@Hitek146 I learned it as "white-orange" because verbally it's easier for me to differentiate. Saying "orange" twice in a row makes me lose track. Plus, I don't think the design would have worked as well with the colors all on the left-hand side of the shirt.

    • @Hitek146
      @Hitek146 1 year ago +1

      @@VeronicaExplains I agree that the colors all being on the left would be less visually attractive, but I think you have it backwards about the verbal repetition. Putting "white" first means that you are saying "orange" twice, which is what you say is what makes you lose track. Saying "white-orange, orange" puts the two oranges together, while saying "orange-white, orange" separates the two orange words. I always say "orange-white, orange, green-white, blue, blue-white, green, brown-white, brown", only putting the two blues together, rather than putting the two oranges and browns together. Plus, in my experience terminating old-school telephony cabling, where there can be hundreds of pairs in one bundle of many various colors, including purple, the stripe is always said last...

    • @VeronicaExplains
      @VeronicaExplains 1 year ago +1

      @Hitek146 I don't think I have it backwards from my angle, since I know what I remember in the server room (you do you though). Besides, it's a t-shirt, the design was the most important part for me.

  • @lepsycho3691
    @lepsycho3691 1 year ago

    Well I guess I'm done waiting for the 40gig Brocade icx-6610 part deux!

  • @vip_bimmervip_bimmer8033
    @vip_bimmervip_bimmer8033 1 year ago

    Question, is there any place I could find a NR1200 storage server? Your link for it is down so I was wondering if you can still buy them for cheap. Thanks!

  • @andiszile
    @andiszile 1 year ago

    With 100G and a DPU you could boot off the network as if it was a local NVMe drive. That would be an interesting project, albeit an expensive one.

  • @wayland7150
    @wayland7150 1 year ago

    I like the Ethernet colour code T-shirt.

  • @itunassub
    @itunassub 1 year ago

    I NEED to know where you got that shirt, it would be an absolute hit at work.

  • @jgirlyt
    @jgirlyt 1 year ago

    Installing multi-mode fiber, omg what were you thinking oh the humanity

  • @pewpewpew8390
    @pewpewpew8390 1 year ago

    You need a BiDi QSFP28, and you can run 100G on your OM multimode, but cheaper to put in SMF, yes.

  • @1leggeddog
    @1leggeddog 1 year ago +1

    I want that wiring t-shirt!
    white orange
    orange....

  • @cloudcultdev
    @cloudcultdev 1 year ago

    Nice cable shirt!! Where did you pick that up at? I want one!

  • @stormshadow7210
    @stormshadow7210 1 year ago

    Nice video! Would it be possible to see how long it takes to do a snapshot of a VM?

  • @goldmax1412
    @goldmax1412 1 year ago +1

    I had a weird idea to try using high-speed NVMe storage with a 100Gb connection as RAM on older systems (DDR2/DDR3). To finally have the opportunity to download more RAM via the Internet. But this is difficult to implement, because you would need special custom DIMMs with a 100Gb network connection and software to run it.

    • @ewenchan1239
      @ewenchan1239 1 year ago +1

      Install very little physical RAM in your system, put your swap partition on that networked NVMe storage, and use swap-as-RAM.
      You used to be able to do this with GlusterFS version 3.7.
      Been there, done that.
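
      A minimal sketch of the same idea on modern Linux using NVMe/TCP instead of GlusterFS (address, NQN and resulting device name are placeholders):

          # attach the remote namespace over TCP
          modprobe nvme-tcp
          nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2024-01.example:ram
          # use it as swap
          mkswap /dev/nvme1n1
          swapon /dev/nvme1n1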

    • @goldmax1412
      @goldmax1412 1 year ago

      @@ewenchan1239 Nah, it won't be the same as purely "internet" RAM sticks. And how will you do it on a Windows machine?

  • @berndeckenfels
    @berndeckenfels 1 year ago

    Hm, can’t you break out multiple virtual devices to spread out the interrupt load and send queues?
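
    Something along those lines is possible on Linux; a minimal sketch (interface name and counts are placeholders):

        # spread load across more RSS queues, and thus more IRQs/cores
        ethtool -L enp1s0 combined 8
        # or carve out SR-IOV virtual functions as separate devices
        echo 4 > /sys/class/net/enp1s0/device/sriov_numvfs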

  • @stephenharrison7479
    @stephenharrison7479 1 year ago +1

    I'd love to know MikroTik's decision behind all those power input options. Dual mains input, sure, makes sense. PoE, yep, good idea at that power level. A random extra DC input via a terminal block connector, wait, what? Please tell me there's some crazy engineer at MikroTik who, like me, has a garage without mains power and a dodgy solar setup, and they wanted to run some 100G setup out there directly from that! (I suspect it's more likely something to do with data centres having a low-voltage feed from UPSes / shared PSUs rather than dedicated PSUs in various boxes and all that - but it's kind of an interesting option to include, panel-space-wise, when some other network connection might have been nice).
    Probably just a home lab issue, but it sure would be nice to have one (or more 🙂) 10G port option for WAN/LAN interconnect rather than having to use one of the 100Gs split into 4s, and have one as the link to the regular WAN/LAN. At the price point they've got to be thinking a bunch of us crazy home labbers ( 🙂hello 🙂) are going to be buying this, right? (Obviously I'm hoping MikroTik are reading this - and as someone who has multiple switches with 2 40GbE ports on them, that's also super frustrating when just one (or two) extra 40GbE ports would have made such a difference, due to uplink switch connections taking them both and not allowing a server to be 40GbE linked as well, but I digress....)
    For other videos - I'd like to see more details on the various optics / cable selections. Having messed with 40G/10G stuff, I've found the optics for SFP+/SFP28/QSFP+/others to be quite a minefield, and it's quite unclear how well 10/25/40/100 play together. Great if you can get them at a few $, but at higher prices it's not fun to be figuring it out. I've started down the 8/12-core MPO fiber path for the 40G stuff; does that work in a 100G switch? (As in, are optics sensibly priced, available and working? Does 8 work or is the full 12-fiber thing needed?) Are there single-mode QSFPs for 40G (and 10G)? What are the options for nice keystone connections (so far I've seen single/multimode only - I'm guessing size-wise that's our only choice really)? I could go on.
    Also, great video 😊 thanks. I nearly unsubscribed at the cocktail part, but then found I wasn't actually subscribed (not sure what happened there, I thought I was, maybe YT tricked me!), so I subscribed despite the cocktail....

    • @hopkinssm1
      @hopkinssm1 1 year ago +4

      Data centers and certain telecom cabinet installs prefer DC power, as they can convert it much more efficiently/cheaply at scale than on each individual unit.
      Ironically, that's similar to, but the exact opposite of, how Google has small DC batteries installed in the tray with every server motherboard so they don't have to worry about whole-data-center power redundancy.

    • @marcogenovesi8570
      @marcogenovesi8570 1 year ago +2

      This is the patented MikroTik shotgun approach; they want to target the biggest possible crowd, and adding PoE and terminal block connectors is very simple and cheap. Adding more connectivity would mean increasing the cost significantly, and it's not what they are going for with these switches. For what they are (modern, low-power 100Gb) it's insanely cheap already.
      40G is not interoperable with 10/25/100/400G, and as a technology it is a dead end, as everyone jumped ship to the 10/25/100/400G train years ago (hence why the 40G stuff is cheap and plentiful on eBay)

    • @SomeMorganSomewhere
      @SomeMorganSomewhere 1 year ago

      @@hopkinssm1 Also pretty much every major networking vendor has DC input options (or in some cases as a standard feature) on their networking equipment so if you want to play in that space you need to do the same.

  • @snapsetup
    @snapsetup 1 year ago

    Wow. Overkill - Love it!

  • @FlaxTheSeedOne
    @FlaxTheSeedOne 1 year ago

    Next to the person suggesting RDMA: the NVMe pool should be a RAID 10. A Z1 will have overhead in terms of block distribution and be a CPU hog when writing at those speeds

  • @NicoKatzinger
    @NicoKatzinger 1 year ago

    please add the command (and maybe a link to the documentation) to change the switching mode of the network cards :)
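
    Presumably this refers to switching the ConnectX ports from InfiniBand to Ethernet mode; a minimal sketch using Mellanox's mlxconfig (the MST device path varies per system):

        mst start
        # 2 = Ethernet, 1 = InfiniBand; takes effect after a reboot
        mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2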

  • @jacketpotato2058
    @jacketpotato2058 1 year ago +3

    Nice!

  • @dragunzonline
    @dragunzonline 1 year ago

    I had trouble using the ConnectX-4 on a board as well. It wouldn't boot with my U.2 x16 bifurcated card installed. They can definitely be finicky.

  • @Clarence-Homelab
    @Clarence-Homelab 1 year ago

    that zoom in on the cocktail shaking
    hahahahahahaha

  • @WebSitePros
    @WebSitePros 1 year ago

    Why didn't you try the bandwidth test utility built into MikroTik, running switch to switch, to see what it could do?
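
    For reference, a minimal sketch of RouterOS's built-in tester (the address is a placeholder; the bandwidth-test server must be enabled on the far side):

        # on the far switch
        /tool bandwidth-server set enabled=yes
        # on the near switch
        /tool bandwidth-test address=192.0.2.2 protocol=tcp direction=both duration=30s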