I saw a server that was 2U, maybe 3U. It's split into four quadrants on the back, and each quadrant can be pulled out and has a complete server built into it, then slides back in. So in theory you can run four servers in two units of rack space. I think if I were to start over again I would've saved some money on my Ubiquiti gear by buying the Dream Machine Pro SE over the Dream Machine Pro, and I would have started with the 24-port Enterprise switch instead of starting with the 24-port switch, then the 24-port Pro switch, then moving to the 24-port Enterprise switch. The latter gives me PoE and a handful of 2.5 Gb ports as well. Currently my Dream Machine and my server are connected at 2.5 Gb.
I have a tip! If you need more rack space, extend the rack to its max depth and put stuff on the back side. Things like a patch panel, a PDU, or shorter switches can use the same U front and back.
All home labs come with regrets; it's just the nature of buying hardware. Count the learning experience as the only non-regret. That's valuable, unlike owning hardware.
I'm surprised you do a manual shutdown on power failure, wouldn't it be kind of easy to use something like Network UPS Tools to auto-shutdown on power failure?
I recently dropped my server from the rack in favor of my AWS account. I can essentially do anything I want for whatever flavor-of-the-month thing I am working on, and I can dictate exactly what I need.
I was waiting to hear the make and model of the PC case and the server rack while I was listening to (not watching) the video, and they never came. Anyone know the make and model of these items?
Unless you have a really good connection to the internet from home with a high uplink and you're planning to provide some sort of hosting services, I see zero reason to start a home lab. It's so much easier to learn Terraform and set things up from there. Just spin up stuff, use it for whatever you need in that moment, then destroy it. Maybe electricity where you are is free, but where I am it costs so much that it's cheaper to shell out 20-30 dollars for temporary instances when I need them. And it requires very little upfront investment.
Great video Tim! Do you have thoughts on the time horizon for future-proofing, for when you might be trading off technological improvements over time? Like for your switch regrets/recommendation, if I think I will need more than 24 ports but maybe not for 5(?) years or so, is it better to buy the 48 port now that will fit my needs in 5 years, or buy 24 now and either another 24 or a 48 replacement in 5 years, when the tech has improved by 5 years, when maybe at that point all the ports are 10gig with some 100gig uplinks? What sort of time frame do you consider to be too far into the future where the future-proofing becomes compromising?
This is a tough call. What I learned is that adding capacity is relatively cheap if you do it up front, but expanding later can be expensive (you have to rebuy). Adopting new tech is also expensive. I try to balance this out. I wish there were a great answer to this, but I just try to do the calculus when I buy. For instance, there is a 48-port switch that is PoE and also 2.5 Gbps. I seriously considered it but then looked at the price: it was a third more for a tech I probably won't use, especially since the PoE devices I'd plug in are typically 1 Gbps at most. It's always a case-by-case basis, and I will be sure to continue to share my findings in the future!
I wish I hadn't run cat6a to my office for 10gig, those SFP+ RJ45 modules get really hot! Also I think you can reassign that 10gig port on your UDM Pro to LAN with a beta firmware.
For me, I view Ubiquiti as the Apple of networking. You may call that a compliment, but I don't mean it like that; I mean it's the locked-down, walled-garden ecosystem that I cannot stand to use. I'm just getting started in homelabbing right now and haven't gotten any real equipment yet. I only have a basic prosumer-grade router, but I want to custom build a 1U box to run pfSense, and then custom build a 4U server with the front covered in drive bays to install TrueNAS and run my Docker containers and VMs directly on that too.
I have the same thought a couple of times a month: should I start all over again with my Proxmox server and all my VMs? A lot of things I would do differently and more optimized. But then I realize my current setup is already a refresh I did 10-12 months ago, when I had the same thought to start from zero and do better. Any setup is a great setup as long as you can easily find problems and fix them.
Why would you put your 4 SSDs in RaidZ2 instead of a mirror configuration? If you're going to have half capacity anyway, you can save yourself the parity calculations and speed up your storage drastically.
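For what it's worth, the capacity side of that question is easy to sketch. This is a quick check with an assumed example drive size (not a number from the video):

```python
# Hypothetical numbers: usable space of the two 4-SSD layouts discussed above.
drive_tb = 1.0   # assumed example drive size
n_drives = 4

# RAID-Z2: n-2 drives of data; any 2 of the 4 drives can fail.
raidz2_usable_tb = (n_drives - 2) * drive_tb

# Two striped 2-way mirrors: half the raw space, 1 failure per mirror pair,
# and no parity calculation on writes.
mirror_usable_tb = (n_drives // 2) * drive_tb

print(raidz2_usable_tb, mirror_usable_tb)  # 2.0 2.0 -- same capacity either way
```

Since the usable capacity comes out identical at 4 drives, the choice really does come down to failure modes and the parity overhead.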
Thanks so much for this video. I'm looking for a complete inventory that can help me buy everything I need to build my own server. Can you share that info?
What bothers me the most is the LED strip: You can see the single light spots because they are so far apart from each other. In my rack, I've used a 6000K (cold white) strip with COB LEDs where the elements are much closer to each other - so it's like one long light source.
The nice thing about the Aruba 1930-24-PoE is that all 24 ports are PoE+ and it has 4 SFP+ ports! And here's the kicker: it has a lifetime warranty AND it's cheaper lol!
Worth noting you can also run multiple battery backups. With multiple battery backups you can separate items by importance. My main network runs on my bigger backup and my less important stuff runs on the smaller unit. I do split my redundant power supplies on my servers between both backups though.
This small touch screen looks very easy to use, and your explanation is very attentive, careful, and beginner-friendly. MikroTik is my go-to!
Instead of the Supermicro 1Us, it sounds like a couple of used Dell R630s would have been a better fit for you: dual socket 2011-3, 24 DIMM slots, tons of 2.5" hot swap at the front. They're pretty cheap; you might still think about getting them.
What's one thing you regret buying for your HomeLab??? 😭
The budget is never enough
I regret only this: I bought a UDM Pro and 4 days later the UDM SE was announced :D
My first cheap crimp Patch Panel. Spent so much time debugging and rewiring only to realize that the problem wasn't my crimping, but the patch panel itself 🤦🏻♂ I've upgraded to a Keystone one and it's been smooth sailing since.
@@LegionInfanterie no open refund?
Not getting a bigger apartment so I could have space for a rack :*( For now I have to live with tower servers
The other comment complaining about this video disappeared - but I wanted to tell that person that the occasional hardware video is appreciated as well. Not all of us are pure software fans, so there is an audience for content of this kind.
I'd like to second your comment, as I commented below the now-deleted comment. I appreciate both: specific homelabbing content dealing with software and services, and more general content like this video.
You can't have the software without hardware. Anyone who develops software will be able to write more reliable and efficient code when they understand hardware and networking.
Tim’s channel is primarily a Homelab channel. A huge chunk of this hobby is hardware. If you’re just interested in software solutions, I’d say you are not into Homelabbing: you’re into whatever software solution you’re trying to achieve. And you know what? That’s perfectly fine, but it’s only part of Tim’s (and most of us’) hobby (and channel). ¯\_(ツ)_/¯
Thank you!
@@levifig 100% agree. It's not a homelab without a rack of your own hardware. And we like the content.
I have a homelab that is converged onto one server running dual 16 core Opteron 6380s with 256GB of Ram and 8 SSDs in a Raid-Z2. I get crap all the time for the vintage of the CPUs, but they handle parallel workloads easily and the lack of SAS spinners means that power consumption is pretty decent.
With the UDM Pro you are able to change the 10G WAN to a LAN port. Click on that port, and it will bring you to a screen where you can change it to a LAN port. It works in 1.12.22.
Regarding the UPS: it should be possible for the servers to monitor the UPS input power state. Most UPSes can give out this information, and when external power disappears, the servers can shut themselves down before the 10-minute runtime of the UPS runs out.
I have a patch panel similar to that where I use keystone couplers and I don't regret it one bit. If I was running something more than gig I might care, but it's so convenient. you can even swap out those for other keystones like USB or display if you want.
I have several used Dell servers. My latest one is an R730 with 2x 16-core Xeon CPUs (E5-2698 v3 @ 2.30GHz), 256GB DDR4 ECC RAM, 2x SFP+, 2x 1G NIC, and 8x 2.5" SAS drive bays for about $1700. I purchased SSDs separately and don't remember how much those cost.
That Tripp Lite can be monitored via USB or serial. Tools like NUT on pfSense, TrueNAS, or plain Linux can then be used to trigger a safe shutdown of your hardware.
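As a sketch of what that monitoring looks like under the hood: NUT serves UPS state over a plain-text protocol on TCP 3493, and a shutdown daemon polls values like `ups.status`. The UPS name "tripplite" here is hypothetical (it would come from the server's ups.conf):

```python
import socket

# A GET VAR reply from a NUT server looks like: VAR tripplite ups.status "OL"
def parse_var_reply(reply: str) -> str:
    return reply.split('"')[1]

# Hedged sketch of querying a NUT server; assumes one is running on the host.
def nut_get(ups: str, var: str, host: str = "localhost", port: int = 3493) -> str:
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(f"GET VAR {ups} {var}\n".encode())
        return parse_var_reply(s.recv(1024).decode())

# Example replies the parser handles:
print(parse_var_reply('VAR tripplite ups.status "OL"'))       # OL (on line power)
print(parse_var_reply('VAR tripplite battery.charge "100"'))  # 100
```

In practice you would let NUT's own upsmon handle the shutdown rather than polling yourself; this just shows why any OS with a TCP stack can watch the same UPS.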
The new Unifi Network allows you to reassign that port. Also port 8 can be added to the mix on UDM Pro.
Fibrain makes a 0.5U patch panel that uses cassettes. Just a thought.
I've read that the UDM Pro WAN can be changed into a LAN.
Not having enough cooling in the server closet
Really not production ready: Proxmox is not HA, and with only 2 nodes you're really limited. A Dell C6100 with Proxmox + a Ceph server (SSD of course) would be much cheaper, and you'd be capable of having 24 cores per node. Also, 10 Gbps SFP+ would be a must for the Ceph server, and this would save you the cost of the PoE. Change the Raspberry Pis for Labrador boards; Pis are a hyped board and really don't make sense. Only my opinion, one point of view.
I don’t need HA VMs; I have HA services with Kubernetes
@@TechnoTim if you have only two nodes it's not HA. You have a single point of failure. Simulate a power supply outage. Imagine a hospital service: would you rely on a node with a single power supply?
I love your stuff, but dude, you talk way too much about stuff that no one cares about. Get on with the video. I'm here for the hardware, not the explanation of whether you regret what you did. You spent five freaking hours explaining why you chose it. It's driving me nuts.
Funny, that’s all the stuff that everyone loves
Yeah, I'm kind of sorry for that. I was a little upset with someone else and I took it out on you. Also I was a little drunk. But I wanted to apologize; if you want, I'll gladly take down the comment. Not a big deal.
I honestly forgot about this post
Actually, with the new firmware version on the UDM Pro you can assign both SFP+ ports to be LAN or WAN. I am so happy they finally added that feature!!
Yeah, the new feature is really handy. In the EA firmware it's even better. But you have to be wary of how the ports are wired: to get between any of the ports, except between the gigabit switch ports, you have to go through the CPU. So I'd probably skip using them if you want high-speed networking across the two 10G ports; they aren't a switch.
O0o0o0. I didn't know that. Must try it
You know you can change the SFP+ 10GbE WAN port on the UDM Pro to a LAN port? It's a setting in the GUI.
If so, that must be a brand new feature, as I see no such option on mine, but I also haven't updated the firmware in a little while.
@@Blooest there’s a specific firmware update that enables it to be switched
To be honest, I rarely regret anything in life. When you make a choice, you make it based on what you know, the resources you have, and what is available at the time (any choice in life). So you don't really have much of a choice, because 100 times out of 100 you would choose that option, for the first time, based on that specific situation.
We learn from our mistakes!
And I think learning from these mistakes is also good to know what not to buy in the future. As long as you don't waste money, but make a rational decision to begin with, you either pay for a good product or pay for a lesson.
As of the newest Unifi Update, you can now reassign that WAN port to be a LAN port!
Now this is cool! Thanks for the info
I remember you recently made a NUT video about being able to safely power down all your devices connected to the UPS, but in this video you said you only have 10 minutes of runtime before you have to hurry and power down everything. Are some of your servers/components not connected through USB/etc. to NUT so they power down automatically?
And at 5:55 we can see 50% of the available memory bandwidth not being used. Move the two modules in the black slots over to the two blue slots on the other side of the socket. Then you will use all 4 available memory channels instead of just 2.
I myself have recently gotten into setting up my own home network/computing lab for better or worse... Currently though just have one 2U server and no rack for it yet. But one has to start somewhere...
Thank you! Since then all slots are now filled but good to know!
@@TechnoTim Information regarding channel layout tends to exist in the motherboard/system manual. Usually alongside a "recommended population order."
Some motherboards also print: A1, A2, B1, B2, C1, C2, D1, D2. Or A1, B1, A2, B2, A3, etc. Or other fancy numbering systems. But if the naming/numbering doesn't make much sense, then it is best to check the manual.
Though there are some cheap 2011(-3) boards that only use 2 of the four channels regardless, despite often having 4 memory slots...
About your NetApp: look up the difference in power supplies for it. The newer PSUs draw much less power.
Appreciate you giving your thoughts on the equipment decisions you've made over time. For me, and I know I'm not alone in this, I like to skip straight to the "perfect setup" to avoid some of the regrets and pain points. That said, these kinds of decisions are so subjective because everyone's needs, goals, and resources are different. At the end of the day, the things we look back on as mistakes are often the most valuable aspects of the learning process. Experience is the greatest teacher, so while it is wise to learn from other's mistakes, we shouldn't look to avoid the growth that comes from making our own.
Congrats on the channel growth, and thank you for bringing us along with you on your journey.
2 comforting thoughts for me are that 1) everything is always changing: your needs, your knowledge, (software and hardware) products available in the market, energy prices and the time you have available for this. And 2) bad choices you make in only software (so not hardware, which is by its nature static) will have to be “fixed” with you spending time, perhaps your most valuable resource. Therefore it is best to not always want it to be perfect (because, in terms of continuity, that is impossible) but rather see it as a piece of art at which you can chisel away, learning stuff along the way.
A big issue with the "perfect setup," from my current journey, is that in the time it takes to afford and get everything, technology is already passing you by and you're losing the ability to experience home labbing. I am a victim of this, and I wish I could just dive into home labbing. For me it's the networking side that is going to cost the most, while the initial case for storage is my other hurdle; everything else I can pick and choose as needed.
I'd love to have MicroCenter in the Netherlands...
Press F to pay respect to the old Dell R710 that became an awesome empty slot. Where did it go Tim?
I gave it a new good home, and sold it to a new, young homelabber willing to learn in the Mpls area!
I like your patch panel, but I don't yet have any experience crimping RJ-45s. Some day I plan on getting a crimper and trying, but that's going to have to wait for now.
Cool to see your rack, but it would be interesting to have seen the price you paid then versus now as a part of the "keep" or "upgrade". Cost and Energy Usage are two factors that add up over time.
Most of my “upgrades” were due to power consumption and I paid retail for most things which hasn’t changed. I have them all listed in the gear recommendations link!
What would you get instead of the disk shelf then to house your drives?
That is what I wondering too. I’m in the process of looking for a disk shelf so I’m really curious
Did Tim hint at what he would have replaced the shelf with?
@@antoarre not that I’m aware. Tim has said soon for sharing that info but when will that be I can’t say. I’m hoping the next video or the video after that.
@@josephmyers5956 Thanks for the info. I'll be sure to double check the upcoming videos for it. I have been torn getting a disk shelf, but knowing the power draw, I'm interested in alternatives. Thanks again.
@@antoarre No problem. I'm in the same boat too. I have been thinking of getting one too but would like to know what the alternative would be too.
Why don't I crimp my own cabling? Because it's tedious and not fun lol
I prefer to buy older end-of-life data center switches and network equipment. I bought a Juniper 48-port switch for $50; all 48 ports are PoE+ 1gig, plus the unit has two 10gig SFP+ ports. I did the same thing for my UPS: I bought a used APC SMT2200RM2U for $200 and it came with brand-new batteries installed. I prefer my UPS to be compatible with apcupsd so that during a power outage all my equipment automatically shuts down when the UPS battery gets down to a level I can set for each device. I'm even thinking about buying another end-of-life switch right now too, as I can get a Celestica Seastone DX010 32-port 100G QSFP28 switch for less than $500 on eBay! I have NO idea what I would use 100gig networking for, but each port can also be broken out to two 50gig ports or four 25gig ports, so I could probably make use of it somehow!
Wouldn’t it be less efficient in power?
@@vincentnthomas1 slightly, yes. But not as much as you'd think. If you compare new switches with as many ports and the same features (such as poe), the used older switches only use a tiny bit more power.
What on earth are you using that much RAM for? You have 0.5 TB of RAM, which means each hypervisor instance uses about 50 GB of RAM (and 1 to 3 CPU cores????).
2 servers, each with 256GB ram each, 10-15 vms on each server :)
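The arithmetic in this thread works out less extreme than the question suggests. A quick check using the reply's numbers (the 12 below is just the midpoint of the stated 10-15 VMs per node):

```python
# Numbers from the thread above, not measured values.
nodes = 2
ram_per_node_gb = 256
vms_per_node = 12  # midpoint of the stated 10-15 range

total_ram_gb = nodes * ram_per_node_gb           # 512 GB, i.e. the "0.5 TB"
avg_gb_per_vm = ram_per_node_gb / vms_per_node   # ~21 GB per VM, not 50
print(total_ram_gb, round(avg_gb_per_vm, 1))     # 512 21.3
```

The "50 GB each" estimate divides the full 512 GB by one node's VM count; per node it averages closer to 20 GB per VM.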
The network setup is nice. MikroTik is my go-to because it packs every enterprise tool you could need for a home or small business into an extremely reliable appliance.
Curious what you would choose for your disk shelf now. Go the 45Drives route? I kind of want to buy one of their cases and a backplane to build my own, but I doubt that is affordable in disk-shelf terms.
What disk shelf would you recommend?
Soon!
I have decided to go TP-Link Omada. The functionality is for the most part very comparable (central management), and the WiFi APs as well as the managed/PoE+ switching are far cheaper. I am just not throwing caution to the wind and buying the best-of-the-best right out of the gate; Omada is fairly new compared to UniFi, and TP-Link is adding new hardware all the time, so I do plan on upgrades in the future as I cycle older switching components out of the rack as I buy new Omada stuff. I have limited myself to a 22U rack and will NOT upgrade it.
Lol.. almost anything you talk about is vastly wayyyy over the average guy's head... price wise... throwing thousands of dollars at a single switch is definitely not for everyone.. But I know.. you are talking about your equipment..
BTW you can turn that 10gig WAN into a LAN (Ports / Port Management / click on port 8, set as WAN2; click on port 10, a.k.a. the 10Gbps WAN, set as LAN) and done, you have 10Gbps LAN ethernet ports.
For those that don't need a huge number of drives: I use the EMC KTN-STL3 and it idles at 30W with no drives. Each drive adds around 7 watts, whether SAS or SATA.
I think on the latest UniFi update you can now swap the 10GbE port to LAN!
Nice video as always! If you do decide to upgrade your rack for a few more slots you have probably seen the StarTech 25U rack is real popular in the homelab community and isn't horribly expensive. I have been using it for over a year and it's been great.
Hi Tim; I'm wondering why you would ever need more than, say, 2 VMs on your home lab server. I do like the choices you made for your equipment, especially units which have blinking or rolling lights. Gives the whole rack some bling. I'm currently using the Synology 918+, which fits my needs fine; however, I did want to install a VM so I could run a version of Windows outside my regular macOS. I couldn't seem to get it configured completely, and maybe it's due to the location I'm currently living in.
I can't see spending $1100 on a 48-port PoE switch when you can get 4+ 10G SFP+ ports and more bandwidth than you could use in a lab from a retired enterprise switch like an ICX 6450 for less than $200. PoE++ is the only thing I don't ever see on old switches, but that is more for PTZ cameras anyway.
If I could reset my home lab, I would buy an Aruba 24-port PoE switch instead of the non-PoE version. I would get a Supermicro 2U server instead of a 1U. The 1U version is very noisy and lacks space for a second expansion card.
I personally regret not getting a better UPS that's actually rack based. Using the keyholes to wall mount a UPS probably isn't the best plan. Any rack mount UPS recommendations? Is Li-Ion worth it?
I’m wondering about this too
Here's the one I have! amzn.to/2XGN6yt
@@TechnoTim thank you. I have been looking at this one, though I keep seeing people mention getting one from APC, but those are definitely pricier.
I saw a server that was 2U, maybe 3U. It's split into four quadrants on the back, and each quadrant can be pulled out and has a complete server built into it, then slides back in. So in theory you can run four servers in two units of rack space.
I think if I were to start over again I would've saved some money on my Ubiquiti gear by buying the Dream Machine Pro SE over the Dream Machine Pro, and I would have started with the 24-port Enterprise switch instead of starting with the 24-port switch, then the 24-port Pro switch, then moving to the 24-port Enterprise switch. The latter gives me PoE and a handful of 2.5 Gb ports as well. Currently my Steam machine and my server are connected at 2.5 Gb.
I have a tip! If you need more rack space, extend the rack to the max depth and put stuff on the back side. Things like a patch panel, PDU, or shorter switches can use the same U front and back.
What wattage are we talking about Tim?
All home labs come with regrets... just the nature of buying hardware.
Count the learning experience as the only non-regret. That's valuable, unlike owning hardware.
I'm surprised you do a manual shutdown on power failure, wouldn't it be kind of easy to use something like Network UPS Tools to auto-shutdown on power failure?
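For reference, a minimal Network UPS Tools upsmon.conf sketch for that kind of auto-shutdown (the UPS name and password here are placeholders):

```
# /etc/nut/upsmon.conf (sketch; "myups" and the password are placeholders)
MONITOR myups@localhost 1 upsmon s3cret master
MINSUPPLIES 1
SHUTDOWNCMD "/sbin/shutdown -h +0"
POLLFREQ 5
```

Point secondary machines at the same UPS as `slave` instead of `master` and they'll shut down first when the battery goes critical.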
I recently dropped my server from the rack in favor of my AWS account. I can essentially do anything I want for the flavor-of-the-month thing I am working on, and I can dictate exactly what I need.
I was waiting to hear the make and model of the PC case and the server rack while I was listening to ( not watching) the video, and they never came.
Anyone know the make/model of these items?
Unless you have a really good internet connection at home with high uplink and are planning to provide some sort of hosting services, I see zero reason to start a home lab. It's so much easier to learn Terraform and set stuff up from there. Just spin things up, use them for whatever you need that moment, then destroy them. Maybe electricity where you are is free, but where I am it costs so much that it's cheaper to shell out 20-30 dollars for temporary instances when I need them. And it requires very little upfront investment.
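That spin-up/destroy workflow can be as small as a single Terraform file. A sketch (the region, AMI ID, and instance type are illustrative placeholders, not recommendations):

```
# main.tf (sketch)
provider "aws" {
  region = "us-east-1"   # placeholder region
}

resource "aws_instance" "lab" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.medium"              # pick whatever the experiment needs
}

# terraform apply   -> create the instance
# terraform destroy -> tear it down when you're done paying for it
```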
Your Supermicro boards have the DIMMs populated wrong; consult the manual for the X10SRi. A1, B1, C1, D1 should be populated ;) (blue slots)
I’ve since filled them up! Good eye!
I have that 16-port PoE switch. You are forgetting that the switch can deliver only 42W in total 👎
If you sign up for the free SSD from Micro Center, get ready to be spammed multiple times daily.
So you mentioned upgrading the disk shelf. But with what? Another full-fledged server?
Is it just me, or is the sound in this video really bad? Echo or hollow... hard to say
What would you replace the disk shelf with? another server that just has a lot of slots?
Soon!
@@TechnoTim soon as in yes? or soon as in there is a video about this coming soon? :P
I never understood why people run so many VMs. Can anyone give me some examples?
Techno Tim - the UniFi 10 Gbps WAN port can be used as LAN
Is there a way to speak to you directly? I have an idea for you and don't want to broadcast it in the comments lol.
Regrets? No. It was all learning, and the early days were as important as the hardware; that year I started with limited funds. It was a lot of R&D.
I got 2 of these Chenbros and yeah, they are absolutely horrible for cable management.
Why isn't the Cisco network switch screwed in???
It's going on an extended vacation soon!
ETS makes a 0.5U 24-port passthrough Cat6 patch panel
Hehe. Rackomend it...
Great video Tim! Do you have thoughts on the time horizon for future-proofing, for when you might be trading off technological improvements over time? Like for your switch regrets/recommendation, if I think I will need more than 24 ports but maybe not for 5(?) years or so, is it better to buy the 48 port now that will fit my needs in 5 years, or buy 24 now and either another 24 or a 48 replacement in 5 years, when the tech has improved by 5 years, when maybe at that point all the ports are 10gig with some 100gig uplinks? What sort of time frame do you consider to be too far into the future where the future-proofing becomes compromising?
This is a tough call. What I learned is that adding capacity is relatively cheap if you do it up front, but expanding later can be expensive (you have to rebuy). Adopting new tech is also expensive. I try to balance this out. I wish there were a great answer, but I just try to do the calculus when I buy. For instance, there is a 48-port switch that is PoE and also 2.5 Gbps. I seriously considered it but then looked at the price: it was a third more for a tech I probably won't use, especially since the PoE devices I'm plugging in are typically 1 Gbps at most. It's always a case-by-case basis, and I will be sure to continue to share my findings in the future!
Trying to run VMs on hard disk storage. Should have gotten more SSD storage.
I wish I hadn't run cat6a to my office for 10gig, those SFP+ RJ45 modules get really hot!
Also I think you can reassign that 10gig port on your UDM Pro to LAN with a beta firmware.
It's in the released firmware now.
Get solar panels, charge up the batteries, and you have extra batteries :) for the whole house or just for the servers
The UniFi Dream Machine firmware was just updated to allow this! "UDMP Port 10 SFP+ can now be assigned to LAN! (Running 7.1.66)"
😱
First
For me, I view Ubiquiti as the Apple of networking. Which you may call a compliment, but I don't mean it like that. I mean it as the locked-down, walled-garden ecosystem that I cannot stand to use.
I'm just getting started with homelabbing right now and haven't gotten any real equipment yet. I only have a basic prosumer-grade router, but I want to custom build a 1U box to run pfSense. And then I want to custom build a 4U server with the front covered in drive bays to install TrueNAS and run any Docker containers and VMs directly on that too.
What server do you plan to use?
I have the same thought a couple times a month: should I start all over again with my Proxmox server and all the VMs? A lot of things I would do differently and more optimized. But then I realize that the current setup is already a refresh I did 10-12 months ago, when I had the same thought to start from zero and do better.
Any setup is a great setup as long as you can easily find problems and fix them.
Why is it called a home lab and not a home network rack?
I’m setting mine up now. Unifi is that patch panel with couplers? That’s what I’m doing
See my video on what is a home lab, and welcome!
Why would you put your 4 SSDs in RAIDZ2 instead of a mirror configuration? If you're going to have half the capacity anyway, you can save yourself the parity calculations and speed up your storage drastically.
Sorry! I meant a mirrored stripe! RAID10!
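For anyone weighing the two layouts, here's a rough usable-capacity sanity check (a sketch; `usable_tb` is an illustrative helper, not a real ZFS tool, and it ignores ZFS overhead):

```python
def usable_tb(disks: int, size_tb: float, layout: str) -> float:
    """Rough usable capacity for a pool of equal-size disks."""
    if layout == "raidz2":
        return (disks - 2) * size_tb   # two disks' worth of parity
    if layout == "raid10":
        return (disks // 2) * size_tb  # every disk is mirrored once
    raise ValueError(f"unknown layout: {layout}")

# With four 1TB disks both layouts yield the same usable space,
# so RAIDZ2 only buys you parity math, not extra capacity.
print(usable_tb(4, 1, "raidz2"))  # -> 2
print(usable_tb(4, 1, "raid10"))  # -> 2
```

At six or more disks the math diverges and RAIDZ2 starts winning on capacity, which is where the trade-off actually gets interesting.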
I started watching, but gave up. Horrible distracting background music. Thumbs down.
sorry
Thanks so much for this video.
I'm looking for a complete inventory that can help me buy everything I need to build my own server.
Can you share that info?
kit.co/TechnoTim
8:30 I'd just put a 2U blank panel there to fill the gap. Costs about 10 bucks and looks neat.
Speaking of neat looking: Rackstuds! 🤓
What bothers me the most is the LED strip: You can see the single light spots because they are so far apart from each other. In my rack, I've used a 6000K (cold white) strip with COB LEDs where the elements are much closer to each other - so it's like one long light source.
Which equipment is your favourite one among all ? 😀😜
Looking at previous posts, do you still use your Sophos XG firewall or did you completely go with the UDM Pro firewall?
Nope, check out my latest tour!!
lol you seemed really shaky at 6:34 holding that CPU bare handed
So, half-height 1U patch panels. I've got two of those from TRENDnet. But you have to use a punch-down tool because it's too compact for keystones.
Thanks for the tip!
What!? A Pro needing to run and power things down after a power outage?
With the latest firmware for the UDM Pro you can reassign the other WAN port as a LAN port.
Hey man, good video just some feedback.
The echo kills your voice; definitely look at something to resolve that one :)
Working on it! Thank you!
fwiw, 37U racks fit through a door even on casters. It made moving from my old apartment easier.
Great tip!
The nice thing about the Aruba 1930-24-PoE is it's all 24 PoE+ ports and has 4x SFP+ ports!! And here's the kicker: it has a lifetime warranty ANNNNDDDDDD it's cheaper LOL!
Are you running a cp server with all that equipment?
You can buy a 48-port patch panel (it's 1U and keystone)
do raidz1
Keystones over wiring any day, so much easier!
Worth noting you can also run multiple battery backups. With multiple battery backups you can separate items by importance. My main network runs on my bigger backup and my less important stuff runs on the smaller unit. I do split my redundant power supplies on my servers between both backups though.
What upgraded disk shelf would you go with?
My rack is a 25U startech. Wouldn't you know it, a Dell Poweredge R720XD followed me home from ebay the other day.
I hate it when that happens! 😉
@@TechnoTim Considering that they are not that loud when not under full load, I'm looking to adopt another one soon.
Would have been nice to know what you paid for and what you got as a freebie... and the total cost if purchased.
I paid for 100% of everything in this rack. No freebies
This small touch screen looks very easy to use; your explanation is very attentive, very careful, and very friendly to beginners. MikroTik is my go-to!
While watching I constantly keep thinking: what do you use it all for? 😅
Check out my services Tour!
Instead of the Supermicro 1Us, it sounds like a couple of used Dell R630s would have been a better fit for you. Dual socket 2011-3, lots of DIMM slots, tons of 2.5" hot swap at the front. They're pretty cheap; you might still think about getting them.
What Intel CPU did you put in that Supermicro setup?