Since racks are generally located at the very back of a room or closet, you won't have room to access the back of them. So it makes no sense to have the switches face any way other than toward the front.
Him: "i'm gonna work smarter, not harder" .. "cause, i cannot lift the server" Also Him: lifts the server from the table onto the material lift, instead of sliding it onto it ..
Weird that you had such heat problems with your X520 card.. I've been using the same card (albeit probably a newer revision of the X520 chipset) for quite some time, and the card never exceeded the 8 watts specified in the datasheet. I've only recently switched it out from the copper version to the SFP+ version when I migrated to fibre cabling, and the new card also doesn't get crazy hot. Not that much difference in surface temp between the NPU and the SFP+ modules..
So re the Mikrotik: Yes, yes you can. It means you can forgo having the PSU at the back to power the switch. Also, it also means if the PSU at the back fails, the switch will continue... redundancy for free.
Trouble with the DACs isn't at all surprising. There're all kinds of strange incompatibilities with them. Optics and fiber shouldn't be that much more expensive, but have much better link stability, or look at "AOC" cables. That's "Active Optical Cable". They look a lot like DACs, but inside they're two regular optics connected by a fixed fiber.
DACs themselves are actually pretty simple and shouldn't have interoperability issues; the problem is usually the "coding" on the endpoints and whether the network switches and cards are okay with it. While Cisco doesn't want you to do it, you can get greyish-market cables that are flashed with the correct vendor strings to make them look like an expensive vendor part (which basically does the exact same thing, just with a warranty).
Yeah, I don't know why networking is so un-fun, but I also do the bare minimum to get my home networks up and running. Fiber looks awesomely fast though, so hope you keep enjoying now that it's working :)
Networking is piss easy. You take the thing, you stick it in the other thing. What's so difficult about it? *A WILD NETWORK PRINTER APPEARS* No... No.... NOOOOOOOOOOO!!!!!!!!!!!
I hate networking and IT stuff in general. It is not just the middle management of computing, it's the used-car sales force of computing. :) You have the patience of Job to do some of those things.
Ah yes, Mikrotik, comes with free backdoors. As for fiber issues, have you checked dmesg for any NIC issues, or checked the power levels of the fiber? Epyc chips still have NUMA nodes between the chiplets.
Other vendors call their backdoor "cloud integrated". But yes, while amazing price/performance, Mikrotik is not secure by default, which requires considerable legwork on the operator's end. Epyc (and Threadripper, and to an extent Ryzen) is wild in the NUMA department. I have been told in very clever words that that is a pain for software devs in Windows-land.
As a pure server novice myself, I feel stressed every time I see the internals of a rackmount server. Just give me a happy full tower case and I'll be content. That being said, I hope I will at some point be able to justify a nice 4U for home. I will master my fears.
Kinda offtopic but seeing KDE software being used in one of my favorite youtubers videos always makes me happy :D (I'm one of the many devs, hi! Hope everything is working well for ya!)
KDE rocks and it is always the DE I recommend people try! It's been great though I do have one minor issue with the new floating panel I've meaning to check if it's been reported. The show/hide shortcut crashes all of plasma when I try to change it.
You: "I hate networking"
Me, a network administrator: "meeee tooooo"
I don't think anyone "likes" networking, even though I know how to properly build a home network I just... don't because :effort: lmfao.
same, but hate software development more. I don't have that kind of patience lol
I can sympathize, being lazy with my home setup.
I plugged in a used pair of powerline adapters I got on Ebay; plugged one into a wall outlet where my DSL router sits, another one near my two Linux rigs.
That's the extent of networking I've been willing to do at home.
Hey, I'm a field service tech for Schneider Electric working exclusively with UPSes in SLC. If you're willing to pay for shipping, I might be able to get my hands on a functioning UPS that a customer is getting rid of, or bring one down next time they send me to Phoenix. My colleague also has a Galaxy VS he's trying to get rid of, but that's probably way overkill for you. I say that only because finding the batteries for the APC guy you've got is probably gonna be difficult. Home Depot is getting rid of their APC LXs in droves.
Awesome!
Good job getting those networks set up -- yeah, cluster computers often have a similar configuration with one network for BMCs, one for lower-speed access to the OS running on the nodes, and another high-speed network. And it's great you spent time labeling things nicely -- always helpful in the future.
Yeah, I was cringing a bit seeing that ConnectX-3 card, since I knew it might have compatibility issues. Mellanox/Nvidia dropping support for that generation of hardware bit me on a couple systems a few years back. Glad you got the Ethernet mode setting figured out. I haven't had to directly deal with transceiver compatibility, but I've seen other people at work moan and groan about it.
Nice improvements, and hopefully they'll last you quite a while.
Thank you! I've learned with stuff like this it's time well spent to over document things to make it easier to figure out what is going on when you inevitably have to fix something later.
The Mellanox stuff was a bit of a curveball. Thankfully in the final setup I don't need the official config tools, so I was fine setting them up on an older system first and just moving them over after.
@@TechTangents We used to have to set our servers into BIOS mode, boot into the CX3 interface, switch to Ethernet mode, and then re-switch the box to UEFI. It was definitely very cumbersome.
It's the way, they say.
What makes a good cable? Does it always need shielding? Is there something like a poor man's added shielding, or is that just a useless attempt?
@@Tetrodatoxin Yeah, I was figuring the option ROM on the card might have the ability to change that setting too, but wasn't sure. It's certainly possible to have the UEFI equivalent of that, but you have to find the right way to enter it from the system firmware screens, and I don't know if these cards had that (or if they did, you might have needed the right firmware version for the right CPU type flashed on the card in the first place)
One thing about the switching throughput: Add up all switchport speeds and double it for full-duplex and you have the total throughput. If you get a switch that cannot handle all ports at full speed at once, you are either looking at the "el cheapo special" you definitely should not buy or some kind of specialty switch I have never seen.
So for the QNAPs it's 2 ports @ 10 Gbit/s + 4 ports @ 2.5 Gbit/s, all times 2 because full duplex. So (20 + 10) × 2 = 60 Gbit/s switching capacity.
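The arithmetic in that rule of thumb, as a quick sketch (port counts and speeds taken from the QNAP example above):

```python
# Rule-of-thumb switching capacity: sum every port's speed, then double it
# for full duplex. Port counts and speeds match the QNAP example above.
ports = [(2, 10.0), (4, 2.5)]  # (number of ports, Gbit/s per port)

half_duplex_sum = sum(count * speed for count, speed in ports)  # 20 + 10 = 30
capacity = 2 * half_duplex_sum  # doubled for full duplex

print(capacity)  # 60.0 Gbit/s
```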
There used to be times before bandwidth was "limitless" when over-subscription was very much the norm. And since we collectively survived that period in computing, I would argue that unless an application explicitly demands it, not being able to serve all switchports at max speed is entirely acceptable.
Networking is not a "middle manager". It's the concrete foundation on which everything stands. Without the network, that box is just a giant noisy heater - and not even a good one. Unplug all of your network cables, turn off all the access-points, and disable cellular data. How useful is your house full of computers now? Tell me networking is "middle management" now.
(I grew up in the world before the internet.)
You mentioned Infiniband, and I just realized it's been **18 years** since I last worked with it. My daughter was just a few months old at the time.
wow. I've been running 10G LAN in my apartment for almost 10 years, but never IB.
A few friends do HPC at universities... think they still have >=100G IB
From memory the gig port on the Mikrotik is directly connected to the management CPU, not the switch chip. If that's the case all gig traffic is software switched, so you won't get amazing performance but at least you have connectivity.
I have the same issue with my DAC cables. I bought USB keystones for Home Assistant and just pushed the cable through the empty hole for now because it's simpler than finding cables when I already have long extensions plugged in.
10G copper heat and consumption is kinda solved, check out Marvell AQC107 (and others), it has like 6W TDP.
Or one could go 5 Gbit with the RTL8126, which even ships and works normally without a heatsink at all.
The main issue has been with SFP+ to RJ45 10GbE modules, which might overload some switches when using many at once. For 1-2 connections or dedicated copper-only switches you should really be good by now. But then again, that mostly applies to using existing structured cabling towards workstations, rather than server interconnects.
It wasn't a decade ago. Thus the issue with those old cards.
60Gbps isn't actually a lot for the QNAP switch, considering all the ports are full duplex. If every port were being saturated both up and down, all 60Gbps of capacity would be used.
whoa buddy, no fiber for capswiki?
you want me to stand around all week waiting for that 28 kilobyte file?
Aw, I wish I knew you were doing networking stuff, that's my wheelhouse! For the core switch I would've recommended a Brocade ICX6610. It's got plenty of SFP+ ports and is one of the few relatively affordable switches with native 40 gig QSFP ports. There's a great thread on ServeTheHome about setting them up for a homelab environment. Power consumption isn't the greatest, but it's a hell of a lot better than that 400 watt HP monster. (Seriously, early 10 gig stuff isn't worth messing with outside of a curiosity.)
Ugh, I've got PTSD from those APC UPSes. That particular model has a design flaw: when the battery goes bad, and it will, the battery will bulge and jam against the frame. This sucks because the front of the battery slot has a lip at the top that the battery gets stuck behind when it bulges. I lost many an hour having to pry batteries out with a couple of screwdrivers and lots of swearing.
OMG, that's so smart. Putting your power cable out the top! I have all my batteries at the bottom of my racks, and I keep running into the UPS wires!
I can't believe the Aruba has an option for using unsupported hardware. It wasn't a feature in the older hardware, back when it was called HP ProCurve. I've used HPE-compatible DACs from FS to connect HPE/Aruba switches to Dell S-Series switches. HPE/Aruba is the only manufacturer I've worked with that actively blocks non-approved SFP hardware.
It's been there for a long time. Procurve was the "cheap" SOHO / business line; they don't care what you plug into them. The bigger ComOS switches do care, but can still use anything. (for the record, I've had HPE switches bitch about HPE's own cables.)
There is a lot of hostage taking going on with the big networking brands.
18:17 I keep the drives in my NAS spinning. Picking a random drive in the array, I see about 60 power cycles in more than 62,000 spin hours, and a large number of those power cycles were from moving drives between bays and enclosures.
For your DAC problem, you can buy cheap DAC cables with a pre-configured "vendor" string, even with different vendors on both ends.
The two-switch setup is similar to what they do inside the communication rooms at Best Buy stores. That setup is alright, but they also have two backup battery systems for their switches for redundancy. That might be something you'll have to consider in the near future, or if the office starts receiving fiber service.
I love the green so much
I have my drives spinning down after 30 minutes in my server, and they have been happy for 10 years now (WD Reds)... HDDs are way more robust than people make them out to be.
thank you. "spindown kills drives" is such a stupid myth that gets repeated all too often.
@@sneak3009 HDDs aren't internal combustion engines. 24/7 operation is obviously more demanding than giving HDDs some rest time here and there.
I had a scare when I had a drive go dark with spin down enabled - but it wasn’t built for NAS operation and I think it was spinning up and down multiple times an hour. A reboot brought it back into service. So as long as you use the appropriate drive and set it up right - I agree lol
This is pretty neat. At my home, since it's a rental, I don't have the ability to run Ethernet in my walls, let alone fiber, so my network consists of multiple routers acting as access points for the main one downstairs, with MoCA Ethernet connecting the routers together. We have total house coverage. The MoCA is only gigabit, but it's MoCA 2.5, so it can be upgraded to 2.5Gig networking if I want.
If you're interested, you can access HP's software downloads for your Aruba switch if you have your own domain and an email tied to it. That's how I got mine, and if my switch has downloads then yours definitely does (Mine's an HP ProCurve, absolutely ancient). The newer web management interface is really, really nice to use compared to the one yours is running.
yes SIR!!! I made the switch maybe two years ago for internal network and WHAT a difference it made...i use pfsense in an old hp desktop with an x710-da4 and a realtek thing then two 8-port generic 10G sfp+ switches and that's enough for me with low wattage....it made an ENORMOUS difference though, its snappiness, not even file xfers but that SNAPPINESS that you feel
I had never considered using other keystone modules in a patch panel! Mainly because I've only ever used non-modular panels...
I rarely have a clue what you are talking about, but you are my favourite youtuber. You are living my best life
The batteries in that APC UPS probably blew because it was overcharging them. In the early 2000s APC switched to using shitty carbon-composition resistors for the battery voltage sense, which end up drifting over time and slowly increasing the float charge voltage into electrolyte-boiling territory. This can be fixed (temporarily via software, permanently with metal-foil resistor replacements), but I've successfully converted two of those (in desk and rackmount form) to LiFePO4 batteries that're more than happy to take a 14 volt float charge, after using APC-FiX to recalibrate the battery runtime constant and adding a battery balancer (optional). On my older SU1400 I actually ended up adding a trimpot to the voltage divider so I could adjust the float voltage manually, as those do not support doing that via software.
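To illustrate why sense-resistor drift raises the float voltage (all component values below are hypothetical, not APC's actual circuit): the charger regulates until the divided-down battery voltage matches its reference, so an upward drift in the top resistor pushes the float voltage up.

```python
# Hypothetical illustration (NOT APC's actual circuit values) of how drift
# in a battery-voltage sense divider raises the float charge voltage.
# The charger holds V_batt * R2 / (R1 + R2) equal to its reference V_ref,
# so the float voltage it settles at is V_ref * (R1 + R2) / R2.
def float_voltage(v_ref, r1, r2):
    return v_ref * (r1 + r2) / r2

v_ref = 2.5            # volts; hypothetical reference
r1, r2 = 43e3, 10e3    # ohms; hypothetical nominal divider

nominal = float_voltage(v_ref, r1, r2)         # 13.25 V
drifted = float_voltage(v_ref, r1 * 1.10, r2)  # top resistor +10%: ~14.33 V
print(nominal, drifted)
```

A trimpot in place of (or in series with) one of those fixed resistors is exactly what lets you dial the float voltage back down by hand.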
I remember when you got the office, and it seemed like so much space!
Remember the good old days when Lantastic was all you needed?
30:34 Where in the Valley are you? My mom just got fiber from whatever CenturyLink is branded as now in Surprise, replacing her Cox cable internet.
FYI: there's no need to fear the networking command-line tools. Depending on the OS, you either edit /etc/network/interfaces or just run "nmtui" and edit it in a semi-graphical way. (OK, some OSes might have some other network config, but those two are the main ones.)
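If it helps demystify the file route: a minimal static-IP stanza in Debian-style /etc/network/interfaces looks like this (the interface name and addresses are placeholders):

```
# /etc/network/interfaces (Debian-style ifupdown)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```

Then `ifup eth0` (or a reboot) applies it; nmtui produces the equivalent NetworkManager config through menus instead.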
Networking is witchcraft to me. I feel you.
I used to heat my house with servers in the winter. True story.
I still do. (well, office.)
I might have an overkill switch at home: a Ubiquiti USW-Leaf, 48 ports of 10G/25G and 6 ports of 40G/100G, idling at 87W with 1.8 Tbps of switching capacity, on only a 600Mbps-down internet connection. No RDMA support on the switch, unfortunately; other than that, it's the best switch I've ever owned, and almost silent.
The hardware is a bit different but the theory is pretty much the same as I have in my colo. I'm able to do everything remotely, which is nice. I'm about a 4 hour drive to the DC.
Also, the description of those power strips on Amazon is quite amusing. I'm not sure why they don't use machine translation as it's surely better than when they do it themselves.
Network is life! System Admins are nerds! 7:03
Infiniband is awesome.
I am running 100 Gbps Infiniband with a 36-port 100 Gbps Infiniband switch. It's AWESOME!
Possibly at the end of the year. Hopefully you'll be good at this.
Fiber transceivers are much more power efficient than copper. In my experience, they are also more reliable.
If you can get drives to idle, that's awesome. I've never been able to get my 100 HGST drives to spin down.
Just 60 of them in one case (no PC) is 517W on average.
When you were talking about moving the server, you almost seemed embarrassed to be using a piece of equipment. In my experience working in a lot of fields where heavy equipment is involved, I have come to find two types of people: those who use equipment effectively and unapologetically, and prideful 'strong' guys with bad backs who feel it's more manly to damage themselves than to use tools.
I use a hydraulic table all the time, myself. It's an amazing tool for when I need to move something heavy, but also precisely. Or in some cases where that heavy object might also be somewhat fragile and needs to be handled with care.
It's more that I like to fully explain the "why" I'm doing things to provide insight into my reasoning. In this case even with the lighter server it can be beneficial to use the lift for its fine grained control over the height and alignment of the rails. It is of course extremely helpful for things that are too heavy to move, I have some 160lb hard drives that were my main inspiration.
There is definitely no shame in using tools to help you work.
I'm surprised you don't really do networking things. It's interesting to me because I do it for fun and expect most people to understand this stuff and want to buy networking gear to improve their home connectivity. Then again, I hate dealing with security and VLANs, so I completely understand.
I love your videos and also specialize in networking as my profession. I'd love to answer any questions if you're looking for advice.
Great video 👌👌👌
That series of Aruba switches is really simple to use; they run the older ArubaOS, which is really simple. Aruba changed the newer switches to be port-centric rather than VLAN-centric like the old ones. The 2920 and 2530 series of switches seem to run forever; I have had very few failures with them, and those were mostly caused by lightning strikes.
The newer ArubaOS-CX has very different commands and ways to do things. The VLAN config is a bit simplified and easier, but there is a steep learning curve when coming from the older Aruba switches, mostly because the commands have changed drastically.
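Roughly what the VLAN-centric vs. port-centric difference looks like (a hedged sketch from memory, not verbatim from either OS; port and VLAN numbers are made up):

```
! Older ArubaOS (VLAN-centric): configure the VLAN and list its member ports.
vlan 10
   name "servers"
   untagged 1-12
   tagged 24

! AOS-CX (port-centric): configure the port and list the VLANs it carries.
interface 1/1/24
   no shutdown
   vlan trunk allowed 10
```

Same end result either way; it's mostly a matter of which object you edit when a port moves.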
Why the peanut butter on the shelf?
For what it's worth, it's very hard for iperf to fully saturate a fast connection like those. Even when the hardware is fully capable, there's all sorts of overhead on the software side that prevents 'full' saturation. Also, I _think_ iperf doesn't include frame headers in its bandwidth stats, which shaves a bit more off, but it's been a while.
Point is, those speed readings seem mostly right, and I'd only worry about the 40 gig DAC if you are getting outright _errors_ on frames sent through it.
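If anyone wants to sanity-check a fast link the same way, here's a rough sketch (iperf3; the client address is a placeholder, and multiple parallel streams usually get closer to line rate than a single one):

```shell
# Server side (on the machine with the 40G NIC):
iperf3 -s

# Client side: 4 parallel TCP streams for 30 seconds.
# A single stream is often CPU-bound and won't saturate 40G.
iperf3 -c 10.0.0.2 -P 4 -t 30

# iperf3 reports TCP payload (goodput), not on-the-wire bits.
# With a 1500-byte MTU, Ethernet/IP/TCP framing caps goodput at
# roughly 1460/1538 of line rate, i.e. about 38 Gbps on a 40G link:
awk 'BEGIN{printf "%.1f Gbps\n", 40*1460/1538}'
```

So seeing ~37-38 Gbps on a 40G link with standard frames is about as "saturated" as TCP gets.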
Cool to see the network setup! Also, I do think it's kind of funny that you hate networking so much. I've always found it fascinating, and it's honestly not that hard to learn the fundamentals of routing and the different protocols in the stack. The protocols themselves are pretty dumb, and they were built that way because the computers that were supposed to run them back in the day were also pretty dumb. And this is why you can do silly, silly things like running a basic web server on a microcontroller.
Also if you run servers in VMs on your own hardware and want ANY of it to work well then you *really* need to learn networking lol
Racks are generally located at the very back of a room or closet, so you won't have room to access the back of them. It makes no sense to have the switches face any way other than toward the front.
damn.. you scared me right there - what a first sentence :D
Yup, same here. I'm using UniFi and will soon have the Google Fiber 10Gb plan.
Him: "i'm gonna work smarter, not harder" .. "cause, i cannot lift the server"
Also Him: lifts the server from the table on the material lift - instead of sliding in onto it ..
Weird that you had such heat problems with your X520 card.. I've been using the same card (albeit probably a newer revision of the X520 chipset) for quite some time, and the card never exceeded the 8 watts specified in the datasheet. I've only recently switched it out from the copper version to the SFP+ version when I migrated to fibre cabling, and the new card also doesn't get crazy hot. Not that much difference in surface temp between the NPU and the SFP+ modules..
So re the Mikrotik:
Yes, yes you can.
It means you can forgo having the PSU at the back to power the switch.
It also means that if the PSU at the back fails, the switch will keep running... redundancy for free.
To get unsupported transceivers working on the Aruba switches, you have to re-plug the cables after using the command.
Trouble with the DACs isn't at all surprising; there are all kinds of strange incompatibilities with them. Optics and fiber shouldn't be that much more expensive and have much better link stability. Or look at "AOC" cables, that's "Active Optical Cable": they look a lot like DACs, but inside they're two regular optics connected by a fixed fiber.
DACs themselves are actually pretty simple and shouldn't have interoperability issues; the problem is usually the "coding" on the endpoints and whether the switches and cards are okay with it. While Cisco doesn't want you to do it, you can get greyish-market cables that are flashed with the correct vendor strings so they look like an expensive vendor part (which does basically the exact same thing, just with a warranty).
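On Linux you can actually see the vendor coding the switch is checking, by dumping the cable's EEPROM (interface name is a placeholder):

```shell
# Dump the DAC/transceiver module EEPROM; the vendor name and part
# number fields are what picky switches validate before bringing
# the port up.
ethtool -m enp1s0f0 | grep -iE 'vendor|identifier'
```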
first thought seeing the HP router... huh power must be cheap there... lol
36:31 I would drill the wall here, no thinking about going around. I know it's crazy, but still...
"HydroThunder!"
Yeah, I don't know why networking is so un-fun, but I also do the bare minimum to get my home networks up and running. Fiber looks awesomely fast though, so hope you keep enjoying now that it's working :)
Can you show more videos like Doom on 3 monitors? That was stunning. I wonder if there are more of other old games. All retro games are cool.
"Other hardware" Those pretty blue towers
We've got gamers nexus Steve at home
Now you can goon even harder yippieeee
Thanks for the in depth int-ertainment!
400W a network switch?!? Wtf
Most of that is probably power budget for PoE output.
That's how old enterprise gear runs. They don't need to care about power usage. (fans will be a large part of that power)
@@kazuyachan8212 PoE on an SFP+ only switch?
Networking is piss easy. You take the thing, you stick it in the other thing. What's so difficult about it?
*A WILD NETWORK PRINTER APPEARS*
No... No.... NOOOOOOOOOOO!!!!!!!!!!!
I hate networking and IT stuff in general. It is not just the middle management of computing, it's the used-car sales force of computing. :) You have the patience of Job to do some of those things.
Network engineer here in Gilbert if you ever need any networking help or advice
Enable jumbo packets to help reach faster speeds on your 40-gig Ethernet... oops, just saw that you set jumbo MTU.
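For anyone else trying this, a sketch of enabling and verifying jumbo frames end to end on Linux (the interface name and peer address are placeholders):

```shell
# Set jumbo MTU on the interface (the switch ports and every other
# host on the path need the same setting, or frames get dropped).
ip link set dev enp1s0f0 mtu 9000

# Verify end to end: 9000-byte MTU minus 20 (IP header) and
# 8 (ICMP header) leaves 8972 bytes of ping payload. -M do sets
# the don't-fragment bit, so this fails if any hop can't pass it.
ping -M do -s 8972 -c 3 10.0.0.2
```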
Ah yes, Mikrotik, comes with free backdoors. As for the fiber issues, have you checked dmesg for any NIC issues, or checked the power levels of the fiber?
Epyc chips still have numa nodes between the chiplets.
Other vendors call their backdoor "cloud integrated". But yes, while amazing price/performance, Mikrotik is not secure by default; getting there requires considerable legwork on the operator's end.
Epyc (and Threadripper, and to an extent Ryzen) is wild in the NUMA department. I have been told in very clever words that it's a pain for software devs in Windows-land.
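On Linux you can at least see and work around the topology; a sketch (the binary name `./my_app` is made up):

```shell
# Show NUMA topology: node count, CPUs and memory per node, and the
# relative access-latency "distance" between nodes.
numactl --hardware

# Quick one-line summary of node count and CPU assignment.
lscpu | grep -i numa

# Pin a bandwidth-sensitive process to one node's CPUs and memory so
# its allocations don't bounce across the inter-chiplet fabric.
numactl --cpunodebind=0 --membind=0 ./my_app
```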
nice rack :D
Who needs 10g at home? For what?
As a pure server novice myself, I feel stressed every time I see the internals of a rackmount server. Just give me a happy full tower case and I'll be content. That being said, I hope I will at some point be able to justify a nice 4U for home. I will master my fears.
Does every IT guy have such a mess in the workshop?
Yupp
Who's here watching in 2024?
👾 xx*
The cool thing about this is ... nothing.. literally nothing..
Grouch