Last week you promised and now you delivered! Thanks! I'm looking to set up my first virtualization homelab and this is the perfect product for me!
@@FlexibleToast Do you consider using CEPH in a home setup? Or am I missing something? I've managed CEPH storage for some years, and in my experience you need at least 5 storage nodes for it to make sense, preferably more. The fewer storage nodes you have, the larger the fraction you need to leave unused. For 3 copies and 5 storage nodes, you can't go above 60% utilization if you want redundancy after a node dies. I have three Ceph MONs, but could probably get away with one in a home setup and just use the data on the storage nodes to rebuild the MON if it dies. We switched to dual 25GbE NICs (bonded) as standard some years ago.
@@burre42 yes, I use CEPH in my home setup. Proxmox uses CEPH for its hyperconverging. Proxmox takes away all of the complication of setting up CEPH and it has been super reliable. As for the storage space, of course if you're running only three nodes and you want 3 copies you can only use as much space as is in your smallest node. For a homelab I've never run into that being an issue.
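The capacity reasoning in this thread can be sketched as a toy calculation (an illustration only, not a Ceph tool; the 60% figure quoted above is a more conservative operational margin than the raw arithmetic gives):

```python
def usable_capacity(node_tb: float, nodes: int, replicas: int = 3,
                    tolerated_failures: int = 1) -> float:
    """Data that still fits at full replication after losing
    `tolerated_failures` nodes, in TB."""
    surviving_raw = node_tb * (nodes - tolerated_failures)
    return surviving_raw / replicas

# 5 nodes x 10 TB, 3 copies: nominal ~16.7 TB usable, but only
# ~13.3 TB (80% of nominal) if one node must be able to die.
print(round(usable_capacity(10, 5), 1))  # -> 13.3
```

Real clusters keep extra headroom below even that (Ceph's nearfull/full ratios), which is where rules of thumb like "don't go above 60%" come from.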
Did something similar at a lower cost: took my Lenovo M715 with an old Ryzen 2400GE and swapped the Wi-Fi card for a 2.5GbE NIC. It has a 4TB SSD, a 1TB boot NVMe, and 16GB RAM. All in for under $350.
I was just about to post the wifi-to-2.5Gbit option, then I read your post. Exactly what I would do. Who wants Wifi in our Homelab...lol. Wifi is for our wives/GF's.
@@joeyjojojr.shabadoo915 I was thinking about the 10GbE using the M.2 slot but it would be wasted on the M715. It's always nice to see these videos so I know what I can put in my home lab 3-5 years from now 🤣
@@absolutrichiek I am in that same 3yr+ time frame for some of this hardware...lol. I just upgraded my rack to a 10GbE backbone (and Wi-Fi 6 for the wife) with mixed 2.5GbE and still 1GbE client devices elsewhere on my home network... so I am a bit behind the times.
To all those who cannot afford $4k PC labs, and to STH: thank you. I taught myself about PCs since I was 9, and I learned a lot. So can you. We may not ask for help, but we can still learn. This video is a prime example: he could lie and make you buy overpriced BS, but he doesn't. God bless you, sir. You saved my pockets and others'.
I appreciate your continued enthusiasm in delivering your presentation. It is evident that you are passionate about the topic and your energy has made the presentation engaging and enjoyable. Your enthusiasm is contagious, and it has certainly made a positive impact on the audience. Keep up the excellent work!
Powerful, sips little power, and at a decent price. What a combination! I would definitely consider buying one of these with all the upgrades already done. Specifically the 10GbE version. I would have a lot of fun setting this up! 🤤👍
Sips little power? The CPU's TDP is 99W! IMHO this is a terrible setup. TBF I don't think the reviewer is serious with this build, just trying to max it out, but if you were to put consistent load on this thing it would fail quickly. The combined heat of the CPU and 2x NVMe drives will kill the thing in short order, and these CPUs in a tiny PC make no sense; a Ryzen U-series like a 5800U makes much more sense.
@@elduderino7767 I agree about the NVMe drives; they probably add a lot of heat and may be a game-changer here. But I don't agree about the CPU choice. I mean, they are designed for such appliances. If HP chose to sell these with i7 T-series parts (probably for heavier loads), then I bet it was after some extensive research on temperatures. Besides, at 7:56 you can see the bottom note that this setup had 45 DAYS of uptime; I bet Patrick would report it here if anything bad (like extreme temperatures) had happened during that time.
@@domantlen6231 99w TDP is way too much heat for this form factor why do intel do it? because they have too, it's the only way they can get the performance to rival ryzen - that doesn't make it a good idea if you need this much power just get a ryzen, they are much more efficient it will produce reliably without thermally risking other components
@@elduderino7767 Companies like HP stick with Intel not because of performance but because of vPro and some other CPU features which AMD has not polished enough or doesn't have at all. Some software simply requires Intel CPUs or is optimized for them. And switching CPU vendors is a big deal for a company that serves hardware to all those banks, offices, and institutions. For them, compatibility is more important than performance, I think. I remember more than one case where included instructions (like AES-NI or VT-d) gave Intel the advantage in business use even though AMD offered cheaper and more powerful CPUs. That's why AMD is really doing well in the gaming industry.
@@elduderino7767 Sadly, I don't see any terminals which use Ryzen. Only mini PCs (mostly from China), which in my country are a little more expensive, less flexible (soldered CPU), and less "stable" (their BIOSes and other firmware are less polished and patched than the business terminals').
I have an older one and retrofitted an extra NIC by means of an M.2 i210. It serves as the NAT router for my FTTH, deployed via Kubernetes. So much overkill for a home environment, but very cool if I may say so myself. These things are awesome.
I have a similar setup for my homelab: 4 1L units running Proxmox together. Not as much RAM or as many cores as this example, but I used older, dirt-cheap units to cut costs on my setup, and I have the option to upgrade to better 1L units in the future as needed/available. The one nice thing about using a higher-priced/newer unit like this example is that you can actually get additional Ethernet PCIe cards to work. I had an issue on my units where the PCIe slot was softlocked in the BIOS to only certain Wi-Fi cards, and while it's theoretically possible to circumvent this, I just found a different workaround that fit my setup instead.
Great video! I checked for i7's on eBay and unfortunately they all appear to be $900+ vs. the $500 that you landed on. I do look forward to finding one eventually.
I have 4 of these in a stack running Proxmox and Ceph, because I wanted a really cheap way to get a Ceph cluster up and running for testing and learning. Not exceptionally viable with 1GbE NICs, but with 1GbE AND 10GbE I could see it being a useful cluster; Ceph allows for HA without needing a centralized SAN or other kind of storage.
Great video! Educational, entertaining and motivational! I've got to pause like every thirty seconds and check, like what did he say? Some comments had me scratching my head for a while. Thanks!
I recently put an HP ProDesk 400 G6 on my rack. That machine runs Debian 11 with Docker containers. Nice small machines. But I like the Lenovo M720q's I have a bit more, as they have PCIe slots. In total in my rack: HP ProDesk 400 G6, i5-10500T, 2x 8GB DDR4-2666, 128GB 2.5" SATA boot SSD, 1TB WD SN770 for data, running Debian 11 with Docker Compose. 2x Lenovo M720q, i5-8500T, 2x 16GB DDR4-2400, 1TB NVMe SSD, running ESXi 8 native. Intel NUC7i3BNK, i3-7100U, 8 + 16GB (24GB total), 500GB NVMe SSD, running ESXi 8 native with vCenter installed on it.
Great project idea! I have always loved this method of taking slightly older models and upgrading the right parts. One other use I can think of, if you wanted to build a storage-specific node, is adding an external PCIe enclosure with bigger storage! For like $5K you could really have a high-availability setup with some CI/CD services in place!
This almost makes me want to upgrade my current home server, which is still rocking an Atom N270 on an Intel D945GSEJT that I bought in 2009. And yet ... it still does everything I need it to.
I *REALLY* like this idea in terms of form factor and overall density. I could easily see myself 3D printing a 1U or 2U server rack bracket for a few of these, but I feel like this stuff is *really* expensive for what you get. For an insignificant increase in volume, you can get something like a "Dell OptiPlex 7070 SFF i7-9700 3.0GHz, 32GB, 250GB SSD" for under $400, which is 8c/16t at up to 4.7GHz, upgradable to 64GB DDR4 per Dell (others have tested up to 128GB, and DDR4 is cheap now), and most importantly has a half-height PCIe slot, which gives you a lot of I/O options (for example my go-to, a $40 Intel X540-T2 dual-port 10G Ethernet adapter, also from eBay) or optionally 2-4 more M.2 drives. Heck, you could install a PCIe SAS HBA and use it as the head for a cheap-as-chips external JBOD SAS DAE, plus an M.2 10G network adapter, if you needed a storage solution. Personally, I think I may end up picking up a trio of said 7070 units for a Proxmox cluster: some 2-port M.2 adapters, keep the 32GB of RAM for now (upgrade later if/when I need to and find a cheaper RAM deal), and I'll probably drop a 2TB M.2 SSD in each (maybe 2) and do Ceph with erasure coding to spread the VM data around performantly between nodes, for HA. Haven't decided yet, though. The 7070 *is* certainly larger than the mini HP units presented here, but it's still pretty small SFF in the grand scheme of things.
I disagree on the "insignificant increase in volume": these HP 1L units are very, very small, and make even SFF units like that Dell you mention look like a full-size tower in comparison. In the space that fits 2 of those Dells, you could fit about 6 of these HPs.
Thank you for an excellent review of this tiny powerhouse, Patrick. Watching you explain the possibilities was mind-blowing. I can't wait to see the review of the other GTR mini PC. Thanks and take care.
Sharing a similar working upgrade config I'm using. I have a Proxmox cluster of *three* HP EliteDesk 800 G6 Mini PC i5-10500T's I picked up on eBay. I configured Proxmox HA between selected VMs on the nodes, and Proxmox Replication keeps everything in sync every 15 minutes. Failover tests have worked well, with VMs moving nicely between the nodes if I yank a network cable to simulate a failure. Upgraded each of these three with: 1 x Crucial RAM 64GB Kit (2x32GB) DDR4 3200MHz CL22 CT2K32G4SFD832A, 2 x Crucial P5 Plus 1TB M.2 PCIe Gen4 NVMe SSD - Up to 6600MB/s - CT1000P5PSSD8. Each pair of Crucial SSDs is running in a ZFS mirror as in Patrick's video. Overall very happy with this home lab setup and love having three of these for *mostly* worry-free redundancy. Gives me 36 CPUs and 192GB of RAM to play with, allowing some headroom for VM failovers between the nodes if needed. Might be time to upgrade to 10GbE in these for the fun of it.
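For reference, a 15-minute sync like the one described above maps to a one-line job in Proxmox's storage replication CLI (a sketch; VM ID 100 and the node name pve2 are placeholders, and the VM's disks need to live on ZFS):

```shell
# Create a replication job: every 15 minutes, send incremental ZFS
# snapshots of VM 100's disks to node "pve2".
# Job IDs use the form <vmid>-<n>.
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Check job state and the time of the last successful sync.
pvesr status
```

The same jobs can also be created per-VM in the web UI under the Replication tab.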
The small 10GbE module blew my mind. If there were a way to pop in a couple of 3.5" drives it'd be perfect. That said, let me tell you that 10GbE at 11W idle is a feat; most systems you can build will NOT support PCIe ASPM and will prevent the CPU from going down to C8. Patrick, can you confirm whether this CPU is indeed able to hit C8 while idle with the 10GbE module?😊
Wow. I was planning to order a DAS, but you helped me realize that all I would need is a pair of NVMe drives for a similar setup. And maybe add some more SSDs in a 2.5" enclosure.
Also, be careful with those Sabrent Rocket 4's. They have a reputation of failing without warning, and when they do Sabrent generally doesn't want to cover RMA unless you register within the first few days of buying them, which is really shitty.
By the way, Linux being "taxed" is not measured, like Windows, by CPU percentage but by load average. A basic check is, while running top, to press the 1 key to show the number of cores; if the first number of your load average is greater than the number of cores, then your system is starting to be taxed. In addition, your I/O wait and similar metrics indicate whether your system is being taxed. Linux is not Windows, so that little system could have been quite fine with a busy CPU.
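That rule of thumb is easy to script; a minimal sketch (Linux-only, reads the 1-minute figure from /proc/loadavg):

```shell
# is_taxed LOAD CORES: succeeds when the load average exceeds the
# core count (the heuristic described above). awk does the
# floating-point comparison that plain [ ] cannot.
is_taxed() {
  awk -v load="$1" -v cores="$2" 'BEGIN { exit !(load > cores) }'
}

load1=$(cut -d' ' -f1 /proc/loadavg)
cores=$(nproc)
if is_taxed "$load1" "$cores"; then
  echo "taxed: 1-min load $load1 > $cores cores"
else
  echo "headroom: 1-min load $load1 <= $cores cores"
fi
```

Note that load average counts runnable *and* uninterruptible (I/O-waiting) tasks, so a high number with low CPU% points at storage or network rather than the processor.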
I would love to see the `H` variant on it, not because of the 2 extra cores but because of the 96EUs Iris Xe vs 32EUs UHD770. Transcoding on 12700H, 13700H are a dream. But this unit is ticking almost all of the boxes. ;) Keep up with the good work! =)
Fair point. Sadly that would likely raise power consumption as well if we look at the Mini PC's we have with H parts (but many will think that is worth it.)
So awesome. Perfect and powerful little unit. I'd add more fans somehow, pop it on a laptop cooler or something. Not seeing them for less than $629 on eBay; most are in the $800 range for the base unit, which is quite expensive for my taste. It would perfectly replace the old Dell PowerEdge R700 that's running on top of my fridge. I pulled most of the spinning drives and installed some SSDs and some NVMes on a PCIe card in the PowerEdge, and got it down to 120-140 watts. Running Proxmox, it mostly hosts a T-Pot honeypot and Kali Linux, which I use to test my own firewall. It also heats my kitchen in the winter.
You're not missing out unless part of the value to you was the build process. At this price you can get something similar or better with a lot less after-purchase labor.
Absolutely amazing. I always had a dream to build exactly that, on Proxmox, but I still ended up with an AMD rig with 32 cores, as I needed cores in the end. 12 cores seems decent enough but is obviously a bit of a limitation, and the per-core performance also leaves something to be desired in this particular solution. But for the specific use cases, I guess that's just perfect...
I can't argue with the power consumption and performance advantages of this little node. But at what price? You can buy a single-CPU enterprise server at a tenth of the price and pay the difference in power consumption. Noise is usually dealt with by placing it away from the living area. But thank you for giving us ideas about the upgradability of this unit.
I'm patiently waiting for the new wave of Intel N100/N200/N305 motherboards & mini pcs coming out. Low power draw, it's gonna be perfect for tiny servers.
We reviewed one i3 N305 Beelink EQ12 Pro system and also had a review video with fanless quad-port 2.5GbE N100/N200 systems. We have the N305 fanless video probably going live next month
The fact that 2 SODIMMs can do the same capacity and latency as my 4 DIMMs do is crazy improvement in just a few months. This has more networking and storage in 1L than I have in a MATX system, but I guess my 7900XTX is an advantage of the size. It would be interesting to see some taller (maybe exactly 2x the height for 2L) systems like this that could take a LP-GPU and a beefier CPU cooler setup.
Exactly. I need a device with 2 Ethernet ports (both 2.5GbE or better), 64GB RAM, a LOW-power CPU with many cores, and the ability to take multiple M.2 NVMe drives. I am considering A) a variety of these 1-liter PCs... B) Chinese motherboards with laptop CPUs... and also C) 2650L-type Xeons (also on Chinese motherboards). It's a jungle :(
Awesome find. In the past I had to use a TB3 to 10Gb on my mini hp since there were no mini nics beyond 1GbE. Those adapters were / still are expensive. 😓 I’m currently using dell optiplex 3050s since they let me use $100 LP connectx4’s 😎
I'm currently using an EliteDesk i5 G8 with a hypervisor. Thank you for the video; I did not know that you could add another NIC by removing the USB ports. I currently have USB-to-RJ45 adapters. Will be upgrading to a 10GbE NIC soon.
One thing right off the bat: it doesn't matter how many cores you have if you don't have the power budget for them. 35W is terribly low, and I doubt it can sustain more than about 6 cores' worth of full performance. I have a 2400GE, and capped under 35W it tops out at 2.89GHz versus its rated 3.2GHz; when allowed 65W it turbos up to 3.5GHz and gives full performance. While it is old, I fully expect more modern hardware to have the same limits. I only run a Minecraft server, but even with just that, terrain generation is still heavy and pushes the power in bursts.
You are part of the future... People are working from home and studying from home too... You can put Pop!_OS on one of those and create Android apps too... FREE of course..
Soo..... I started thinking, what about the Lenovo and Dell version. Then, I started thinking about what about the DVR version or another specific use case version. Still thank you, I really enjoyed this video.
Patrick you're making me want to replace my existing M-ATX homelab & NAS node with a couple of these (heck could just grab one and virtualize everything else!)
My small cluster idles at 10W each for an 8500T/6500T mix, so 40-50W *idle* total. I could easily migrate all or most of it to a single 12700T/96GB and cut idle usage a lot.
Just imagine an HA cluster with three of those HP nodes. 🤤🤤 The only thing I missed was a SATA port to plug in a 128GB 2.5" SSD just to boot Proxmox, keeping the NVMe mirror only for local storage.
I've been seeing a lot of confusion about the T-suffix Intel processors. The lower TDP and boost clock only really matter when the system is being fully utilized; they won't change your idle power draw, and idle is what I would expect most homelabs to be doing the majority of the time.
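If you want to check the idle-draw point on your own machine, CPU package power can be sampled from the RAPL energy counter in sysfs (a sketch; the intel-rapl:0 path is typical on modern Intel Linux boxes, usually needs root, and the counter can wrap on long windows):

```shell
# Average CPU package power over a 5-second window, computed from
# the cumulative RAPL energy counter (reported in microjoules).
rapl=/sys/class/powercap/intel-rapl:0/energy_uj

e1=$(cat "$rapl")
sleep 5
e2=$(cat "$rapl")

# microjoules over 5 seconds -> watts
echo "avg package power: $(( (e2 - e1) / 5000000 )) W"
```

Run it on an idle T-series and non-T system of the same generation and the numbers tend to land very close together, which is the point being made above.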
And you should also cover that you HAVE to keep them cool; the whole case is the heat dissipator. You may need an external fan to keep it cool, or it locks up.
If you choose a faster SSD, could you use a smaller heat sink to help with cooling, or maybe cut into the shell next to the SSD? I have poor airflow in my office and worry about heat control.
I had a Lenovo 6th-gen i5 Tiny PC. It was my main node and Plex server. My daughter needed a PC, so I had to reformat it and give it to her. Oh well, life goes on, but these are sweet.
A note about using ZFS (in any form) for the boot drive: if you are planning on passing through PCIe devices (for example, the iGPU on the 12700T), you might struggle with updating GRUB because of the ZFS mirror. (I know because I tried it before.) If you don't use a ZFS root, passing through PCIe devices in Proxmox gets a lot easier, or at least more straightforward. Just something for people to be aware of when they are installing Proxmox.
@@WOWIMEXCITED If you follow the instructions on how to pass through a GPU in Proxmox (at the time, I think that I was testing with Proxmox 7.3-3), there is a step where you have to update the GRUB bootloader so that it would be able to enable the IOMMU groupings correctly, and also disable some other stuff as well. When you issue the command for update grub, it will execute it, but then when you reboot the system for those changes to go into effect, it will fail to do so. I originally tested this with my main Proxmox server which has four 1 TB drives, originally in a raidz2 array, to try and get this to work and it failed. So, what I ended up doing is just building a more conventional RAID6 array for the boot drive, re-installed Proxmox on that, and it's been working fine, per the instructions (for GPU passthrough) ever since. Just be aware of that. In regards to the second part of your question about whether other bootloaders will work or not -- that I don't know. I didn't try it, and especially not with (or for) GPU passthrough. Thanks.
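One likely explanation for the GRUB trouble described above: Proxmox installed on a ZFS root with UEFI boots via systemd-boot, so `update-grub` edits a config the machine never reads. A sketch of both paths (Proxmox 7.x conventions; `rpool/ROOT/pve-1` is the default ZFS root dataset, and `intel_iommu=on` assumes an Intel CPU):

```shell
# GRUB systems (ext4/LVM root): enable the IOMMU on the kernel
# command line, then regenerate the GRUB config.
#   /etc/default/grub:
#   GRUB_CMDLINE_LINux_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub

# ZFS-root UEFI installs boot via systemd-boot instead, so append
# the same flags to /etc/kernel/cmdline (a single line) and refresh
# the boot entries with proxmox-boot-tool.
#   /etc/kernel/cmdline:
#   root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
proxmox-boot-tool refresh
```

After a reboot, `dmesg | grep -i iommu` should show the IOMMU coming up; if it does, passthrough works the same on ZFS root as anywhere else, no RAID rebuild required.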
Very nice; it's a dream for me, however, to own one. Running a 1GHz laptop from 2012 as an Ubuntu server, another laptop motherboard as a NAS, and a TV box as a VPN server lol
Some folks find amazing deals on these types of nodes in yard sales/ dumpsters/ craigslist and so forth. They often get sold cheaply when they are off-lease. Keep up hope!
I think the Lenovo or the HP are the two best right now. Just get something with an 8th gen Core or newer as the performance went way up when Intel started to compete with AMD. Plus for Win 11 HCL.
Just picking the most-reviewed post on this channel in the hope of drawing a bit more attention to this information: hey guys, if you plan to buy a Dell OptiPlex Micro, please avoid models using Foxconn blowers. I think Foxconn has messed up the RPM-vs-temperature curve. In short, the blower on my 5080 sits at a minimum of about 1800 RPM even when the CPU is only at about 33°C (room temperature about 22°C), instead of around 1100 RPM as in the previous models (7070 and 7060). Yes, I have one of each of these models, all with i5 CPUs; the 7070 and 7060 do not use Foxconn blowers. Although from 1100 to 1800 RPM there does not seem to be much difference, it does make a humming noise, and you will definitely notice it in a quiet room. Just avoid it.
I bought an old server board on eBay, 3 years old, with 2x 10GbE and 4x 1GbE. It was only $244 for the board + Xeon Silver CPU + 128GB DDR4 RAM. No case, no power supply, just the core components.
I am running 3x of the SER5 Beelinks with the Ryzen 5500U in them, running Kubernetes. Swapped the RAM to 64GB, put in a 1TB 980 Pro and a 4TB SSD. Those little things are fantastic for what they are. It's still just a single NIC, which is fine for home use, but it sure would be nice having dual NICs, especially if you have external LAN storage like iSCSI or NFS.
Nice setup. A few thoughts/questions: (1) It might be nicer to go with AMD machines if you can find them, since they are all performance cores right now rather than the P+E of Intel 12th gen and beyond. I just don't think there are many enterprise SFF machines like that. (2) What's your recommended DDR5 RAM speed right now that makes the most sense, between 5600MT/s and 7000MT/s? (3) One last Intel bummer is the weak iGPU vs. AMD.
Correct on the iGPU, but Dell does not have OptiPlex Micro 1L at this point. So the Intel ones are the most current up to 13th Gen Core. On the DDR5 with these, 4800 is what this class of machine prefers to run at.
Hi, one thing that I have noticed on a similar unit (G2 800 Tiny with i5-6500T) about power consumption without any load: on Windows it was about 6W, but on Ubuntu Linux it was about 11W. Maybe there is a similar difference here when using Windows? Anyway, this is a great box. I'm also using mine as a home gateway / server, but this newer one is much more powerful from what I can see (2 NVMe slots, better CPU; not sure if the Ethernet is also upgradable on the older unit). Of course the price tag is different, and I don't really need more power, but it could be my next box for such things. Thank you for a great review. BTW, previously I had a 70W-idling i5-4670K with 32GB RAM and ~12TB of ZFS RAIDZ as an Ubuntu gateway. The G2 800 Tiny with the i5-6500T, 16GB RAM, and 2 additional Dell USB 1GbE Ethernet sticks has a different storage configuration (no RAIDZ currently, only SSDs; I still have to set up automated backups or move to Proxmox or something similar, IDK yet). This simple small box is just so good for what I'm using it for, and it's going to pay for itself in about 1.5 years of electricity here in Poland.
Thanks for the video, as usual an inexhaustible source of ideas. Can I ask you a question? In light of all your experiments so far, would you use an HP Elite 600 or 800 unit for your home Proxmox? Is it just a matter of money, or have you found better performance in the 600 G9? Thanks
Honestly, I want this for a firewall + router + Steam cache box. Sure, I'd be fine with dual 1GbE and a Pentium for just a router/firewall, but with that i7, 32+GB of RAM, 10GbE LAN, and dual NVMe this would make an AMAZING caching box. I bet that with the USB-C you could connect a hotspot to it to get 300Mbps down for off-grid pop-up LAN parties where everyone brings a 7840H-based mini PC and everything is solar powered :P I cannot wait for the 128GB SODIMMs. I've got an ITX board that has only 2 DDR5 SODIMM slots, and 96GB is a little bit limiting; mainly I want a total of 128GB like my old system, or even 192GB of system RAM, without paying flagship 256GB prices.
$500 USD can get you a 32c/64t EPYC cpu, motherboard, and quiet air cooler if you go open bench. Comes with a tonne more expansion. Also fully supports VMware ESXi.
The reason (at least for me) why you should use these small form factors is HA and TDP. EPYC has how much, 155W? When you buy 3 Tiny/Micro PCs you will be under 100W, with high availability. I assume sooner or later HA will be one of the required things for our smart (or stupid) homes :-)
Guys, if you want proper capacity for virtualisation, old server hardware is the way to go. V3 Xeons go for £25, with DDR3 RAM to support them at similarly low prices, and they're still perfectly capable machines. Servers have a supported lifetime of 5 years across the board, but most happily work for double or triple that time, and because they're designed for power and stability, you will find plenty of reliable and replaceable components (most can be replaced with the server running, aka hot-swappable).
@@ServeTheHomeVideo I don't see a need for speed (pun intended) for a faster SSD; this is not a PS5. Correct me if I'm wrong, but even with the 10GbE NIC the stock SSD would saturate the network bandwidth. Coming from my current NAS setup with HDDs at 120MB/s, 3500MB/s already seems wickedly fast. I think I would prioritize capacity over speed in a build like the video's.
Got recommendations for a 4-or-more NIC low-end box to use as an in-home firewall, for e.g. a general network (hard to get on, but lots of access), guest network (easy to get on, some access), IoT network (easy to put stuff on, very limited access), and of course the WAN port... all for, say, under $100? Or, at least, under $200? I don't care as much about the speed factors. Like, even if I put some 10G on the general network to get between a desktop and a file server or whatever, I don't care if it has 10G to the Internet, so, I don't see any reason why the firewall should need it. And I've been watching some of your reviews on devices with 4+ NICs, so, I know you have some coverage, but I feel like they're all more expensive??? If I missed something, please point me at it!
Hmmm... looks like still about $300... was hoping for less than that. Have you explored things like the dfrobot "Raspberry Pi Compute Module 4 IoT Router Carrier Board Mini"? -- if I could get a 4-NIC version of that, I think it'd be perfect!
Neat find on that Hasvio switch; odd that it doesn't say what PoE spec it outputs on AliExpress, though. I figure at least af/at, but I'm hoping it'll do the full bt 60/90W as well (not holding my breath). Did you get any more detail from the vendor on what PoE spec it comes with?
No - but since it arrived I can tell you it is a bit funky. 25.5W per port on the Fluke on ports 2-8 IIRC. One port was not PoE. The others were only PoE+.
@@ServeTheHomeVideo Lame, that's what I suspected since they didn't bother to say, probably only AT at 25/30W. Thanks for the update in any regard, looking forward to what you find out as I already have one device that wants 60W BT I had to get an injector for.
Not being negative, but I am unsure about the actual usefulness of something that costs $550 and ends up costing nearly 200% more in upgrades for what is just a Proxmox box. You could do the same with something cheaper.
The Intel NUC 9 is now a more budget-friendly choice if you plan on more NVMe slots. I built one half a year ago with TrueNAS. BTW, the two TB3 ports can provide 20Gbps point-to-point networking.
I find it hilarious that you characterize a 3GB/sec SSD as "slow" - when not all that long ago we had to mirror drives together just to saturate Gig Ethernet!
I remember those days. The first article on the STH main site was RAID'ing 10K/15K HDDs. I mean, SSDs have been around for well over a decade at this point so they are a pretty mature technology.
ESXi issues with the HP EliteDesk 800 G9 Mini: the native NIC, an i219-LM, has issues with ESXi 7 and 8; the link keeps dropping when activity increases. The 2.5GbE NIC is fine on ESXi 8, and on ESXi 7 with the community networking driver.
Since folks are asking a lot about costs. The idea was just to give folks an idea of what is possible and there is a lot of variability here.
- ~$500-$515 for the base we used, but there are options for less than that.
- Maybe $25-30 for an extra 16GB DIMM. 64GB is like $150-200, 96GB is $279.
- $29 for the 2.5GbE NIC. $129 for the 10Gbase-T NIC.
- SSDs from $50 to $400 each depending on speed/capacity.
I think all-in on the top-end config we would have spent around $1.5K
Original Model #?
Taxes excluded of course. The price could easily go up to 2.2k.
Thanks, was wondering the extent of the cost as spec'd.
What's the original model #? I'm seeing prices at $700 and up.
@@someonegreat6931 DEFINITELY keep your eye on auctions, not just BIN's, esp ones that end at crazy hours, like when people are asleep. I got an HP Elite Desk mini for $140 when similarly configged ones were selling for 250ish. Seems like a lot of people can't be bothered with auctions anymore.
Thank you so much. Just built my homelab based on this setup. Affordable, great performance, small footprint, low power usage. Couldn't be more pleased.
Super! Great to hear.
Unfortunately in Brazil it is impossible to achieve a setup like that in an affordable way or even to find the parts like the hp flex nic. I like the video btw. Thank you.
This video was definitely a step up in quality from a lot of the round-ups, or quick looks. Focusing on a single unit of hardware, going into what it has, upgrade paths, and a sample OS configuration/use case is far more useful than just a pile of different mini PCs all mixed together. I think you'll build a higher quality, longer lasting body of work by focusing on these "hardware journeys", and then after you get three or four put together maybe do a year in review, etc.
By having a continuum of builds we can all appreciate how hardware evolves and opens up new configuration possibilities, etc. Plus then you have an encyclopedic body of work to further harvest for a "let's review all the cool stuff added over the past three gen!" type videos. Anyway that's just my $0.02 - everyone knocked it out of the park with this one, bravo!
Great input. I think the next one we are going to try this with is the Intel Xeon W-3400/ W-2400 series. We have several different systems, but I think that is going to be a "here is the motherboard, and here are 3-4 different options ranging from relatively less expensive to very fast, very expensive". We will see how that one goes in 2 weeks or so.
I will just point out that this costs like 3-4x what we usually spend on a Project TMM review and takes probably 3x as long to produce because of waiting for parts and then trying different setups.
@@ServeTheHomeVideo I like the idea of incorporating HEDT and more Enterprise stuff into HomeLabs. Speaking personally, my background is gamer and PC builder turned professional programmer for the past 13 years. A *lot* of the content I'm interested in involves beefing up my knowledge about enterprise gear. I got tired of having 1GbE for over a decade in my Home, so I started looking at more DIY upgrade paths and found you that way.
I would wager there's an entire generation of DIY gamers out there who just don't know what they don't know about Enterprise gear that works in the Home, and I feel like this is probably a goldmine market for yourself and others like Level1Techs who can bring that knowledge to us. I'd love to have 25GbE+ in my home for example, and just be done with upgrading that for another 10+ years. But, how do I go about it? What enterprise gear translates to the home Windows Desktop environment, and which cards run too hot/loud without a rackmount or just don't have drivers? You can see the opportunity here, and it's clear to me I have a *lot* to learn.
Last week you promised and now you delivered! Thanks !
I'm looking to set up my first virtualization homelab and this is the perfect product for me!
Glad we could help!
That 10GbE, what a find. I love seeing this stuff and the potential it has.
10G makes this finally truly viable for things like CEPH. That's what I've been waiting for.
@@FlexibleToast literally exactly what I was thinking when he started installing Proxmox.
@@FlexibleToast Do you consider using CEPH in a home setup? Or am I missing something?
I've managed Ceph storage for some years, and for me you need at least 5 Ceph storage nodes for it to make sense, preferably more. The fewer storage nodes you have, the bigger the share of capacity you need to leave unused. For 3 copies and 5 storage nodes, you can't go above 60% if you want redundancy when a node dies.
I run three Ceph MONs, but could probably get away with one in a home setup and just use the data on the storage nodes to rebuild the MON if it dies.
We've switched to dual 25gb nics (bonded) some years ago as standard.
@@burre42 yes, I use CEPH in my home setup. Proxmox uses CEPH for its hyperconverging. Proxmox takes away all of the complication of setting up CEPH and it has been super reliable. As for the storage space, of course if you're running only three nodes and you want 3 copies you can only use as much space as is in your smallest node. For a homelab I've never run into that being an issue.
Used Ceph in a 3-node production environment with erasure coding, but on dual 100Gbit Ethernet.
Did something similar at a lower cost. Took my Lenovo M715 with an old Ryzen 2400GE and swapped the Wi-Fi card for a 2.5GbE. Has a 4TB SSD, a 1TB boot nvme, and 16 GB ram. All in for under $350
I was just about to post the WiFi-to-2.5Gbit option, then I read your post. Exactly what I would do. Who wants WiFi in our homelab... lol. WiFi is for our wives/GFs.
@@joeyjojojr.shabadoo915 I was thinking about the 10GbE using the M.2 slot but it would be wasted on the M715. It's always nice to see these videos so I know what I can put in my home lab 3-5 years from now 🤣
@@absolutrichiek I am in that same 3yr+ time frame for some of this hardware...lol. I just upgraded my rack to a 10Gbe backbone (and Wifi6 for the wife) with mixed 2.5Gbe and still 1Gbe client devices elsewhere on my home network.... so I am a bit behind the times.
How do you even get a 4TB SSD, 1TB NVMe, and 16GB RAM for under $350?
@@MrQuay03 $50 1tb Samsung 970evoplus, $150 team group 4tb SSD, $30 16gb ram kit. It's all about the sales
To all those who cannot afford $4K PC labs, and to STH: thank you. I have taught myself about PCs since I was 9, and I learned a lot. So can you. We may not ask for help, but we can still learn. This video is a prime example: he could lie and make you buy overpriced BS, but he does not. God bless you, sir. Saved my pockets and others'.
I appreciate your continued enthusiasm in delivering your presentation. It is evident that you are passionate about the topic and your energy has made the presentation engaging and enjoyable. Your enthusiasm is contagious, and it has certainly made a positive impact on the audience. Keep up the excellent work!
Powerful, sips little power, and at a decent price. What a combination! I would definitely consider buying one of these with all the upgrades already done. Specifically the 10GbE version. I would have a lot of fun setting this up! 🤤👍
Sips little power? The CPU's TDP is 99W!
IMHO this is a terrible setup. Tbf, I don't think the reviewer is serious with this build, just trying to max it out.
But if you were to put consistent load on this thing it'll fail quickly; the combined heat of the CPU and 2x NVMe drives will kill the thing in short order.
And these CPUs in a tiny PC make no sense; a Ryzen U series like a 5800U makes much more sense.
@@elduderino7767 While I agree about the NVMe drives (indeed they probably add a lot of heat and may be a gamechanger here), I don't agree about the CPU choice. I mean, they are designed for such appliances. If HP chose to sell these with T-series i7s (probably for heavier loads), then I bet it was after some extensive research about temps. Besides, at 7:56 you can see a bottom note that this setup had 45 DAYS of uptime; I bet Patrick would report here if anything bad (like extreme temperatures) happened during that time.
@@domantlen6231 99W TDP is way too much heat for this form factor.
Why does Intel do it? Because they have to; it's the only way they can get the performance to rival Ryzen. That doesn't make it a good idea.
If you need this much power just get a Ryzen; they are much more efficient and will perform reliably without thermally risking other components.
@@elduderino7767 Companies like HP stick with Intel not because of performance but because of vPro and some other CPU features which AMD has not polished enough or doesn't have at all. Some software just requires Intel CPUs or is optimized for them. And switching to any other CPU vendor is a big deal for a company that serves hardware to all those banks, offices, and institutions. For them, compatibility is more important than performance, I think. I remember more than one case where included instructions (like AES-NI or VT-d) gave Intel the advantage in business uses even though AMD offered cheaper and more powerful CPUs. That's why AMD is really doing well in the gaming industry.
@@elduderino7767 Sadly I don't see any terminals which use Ryzen, only mini PCs (mostly from China), which in my country are a little more expensive, less flexible (soldered CPU), and less "stable" (their BIOSes and other things are less polished and patched than the terminals').
I had a g0 as a server for years! This thing just works; probably the only good thing HP ever made!
This might be the most viable solution for portable openshift lab i've seen so far. I mean 96GB of RAM is indeed game changer here.
I have an older one and retrofitted an extra NIC by means of an M.2 i210 in it. It serves as a NAT router for my FTTH, deployed by Kubernetes. So much overkill for a home environment, but very cool if I may say so myself. These things are awesome.
Very nice!
I have a similar setup for my homelab, got 4 1L units running proxmox together. Not as much RAM or cores as this example but I used older, dirt cheap units to cut costs on my setup, and I have the option to upgrade to better 1L units in the future as needed/available. The one nice thing about using a high priced/newer unit like this example is you can actually get additional ethernet PCIe cards to work. I had an issue on my units where the PCIe slot was softlocked in the BIOS to only certain Wifi cards, and while its theoretically possible to circumvent this I just found a different workaround that fit my setup instead.
Great video! I checked for i7s on eBay and unfortunately they all appear to be $900+ vs the $500 that you landed. I do look forward to finding one eventually.
Yea, they were in the $500-550 used range for about a day after the article/ video came out.
I have 4 of these in a stack running proxmox and ceph, cause I wanted a really cheap way to get a ceph cluster up and running for testing and learning.
Not exceptionally viable with 1GbE NICs, but with 1GbE AND 10GbE I could see it being a useful cluster, and Ceph allows for HA without needing a centralized SAN or other kind of storage.
Proxmox VE has Ceph integrated with a UI as well so that is the use case I was thinking about when we first started these :-)
Great video! Educational, entertaining and motivational!
I've got to pause like every thirty seconds and check, like what did he say?
Some comments had me scratching my head for a while.
Thanks!
Upvote for the Proxmox info! Thank you very much! Just keeps getting better.
Thanks much. PVE since 2014/2015 here.
I recently put a HP Prodesk 400 G6 on my rack. That machine runs Debian 11 with docker containers. Nice small machines.
But, I like the Lenovo M720q's I have, a bit more, as they have PCIe slots.
In total in my rack:
HP Prodesk 400 G6, i5-10500T, 2x 8GB DDR4-2666, 128GB 2,5" SATA bootSSD, 1TB WD SN770 for data. Running Debian 11 with Docker Compose.
2x Lenovo M720q, i5-8500T, 2x 16GB DDR4-2400, 1TB NVMe SSD. Running ESXi 8 native.
Intel NUC7i3BNK, i3-7100U, 8 + 16GB (24GB total), 500GB NVMe SSD. Running ESXi 8 native with vCenter installed on it.
Great project idea! Always have loved this method to take slightly older models and upgrade the right parts. I can think of one other use if you wanted to develop a storage-specific node is having some external PCIe enclosure with bigger storage! For like $5K really have a high availability setup with some CI/CD services in place!
This almost makes me want to upgrade my current home server, which is still rocking an Atom N270 on an Intel D945GSEJT that I bought in 2009. And yet ... it still does everything I need it to.
I *REALLY* like this idea in terms of form factor and overall density. I could easily see myself 3D printing a 1U or 2U server rack bracket for a few of these, but I feel like this stuff is *really* expensive for what you get. For an insignificant increase in volume, you can get something like a "Dell OptiPlex 7070 SFF i7-9700 3.0GHz, 32 GB, 250GB SSD" for under $400, which is 8c/16t at 4.7GHz, upgradable to 64GB DDR4 per Dell (others have tested up to 128GB, and DDR4 is cheap now), and most importantly has a half-height PCIe slot which gives you a lot of I/O options (for example my go-to of a $40 Intel X540-T2 dual-port 10G Ethernet adapter, also from eBay), or optionally 2-4 more M.2 drives. Or heck, you could install a PCIe SAS HBA and use it as the head for a cheap-as-chips external JBOD SAS DAE, plus an M.2 10G network adapter, if you needed a storage solution.
Personally, I think I may end up picking up a trio of said 7070 units for a Proxmox cluster, with some 2-port M.2 adapters. Keep the 32GB of RAM for now (upgrade later if/when I need to, when I find a cheaper RAM deal), and I'll probably drop a 2TB M.2 SSD in each (maybe 2) and do Ceph with erasure coding in order to spread the VM data around performantly between nodes, for HA. Haven't decided yet though. The 7070 *is* certainly larger than the mini HP units presented here, but it's still pretty small SFF in the grand scheme of things.
I disagree on the "insignificant increase in volume"; these HP 1L units are very, very small and make even SFF units like that Dell you mention look like a full-size tower in comparison. In the space of 2 of those Dells, you could fit about 6 of these HPs.
Absolutely crazy review for all of them, my RESPECT to STH team!
Wow this is really end-game homelab hardware, thanks so much for sharing
Thank you for your support of the STH channel and projects like these!
I'm a huge fan of those tiny form factor systems for home labs. I use them stock most times... Might have to look into some upgrades. :)
I think the stock method (or just adding a DIMM and/or SSD) is pretty common. We still wanted to show what else you can do with them.
Thank you for an excellent review of this tiny powerhouse, Patrick. Watching you explain the possibilities was mind-blowing. I can't wait to see the review of the other GTR mini PC. Thanks and take care.
Sharing a similar working upgrade config I'm using. I have a Proxmox Cluster of *three* HP EliteDesk 800 G6 Mini PC i5-10500T’s I picked up on eBay. Configured Proxmox HA between selected VMs on the nodes and Proxmox Replication keeps everything in sync every 15 mins. Failover tests have worked well with VMs moving nicely between the Nodes if I yank a network cable to simulate a failure. Upgraded each of these three with:
1 x Crucial RAM 64GB Kit (2x32GB) DDR4 3200MHz CL22 CT2K32G4SFD832A
2 x Crucial P5 Plus 1TB M.2 PCIe Gen4 NVMe SSD - Up to 6600MB/s - CT1000P5PSSD8
Each pair of Crucial SSDs is running in a ZFS mirror as in Patrick's video. Overall very happy with this home lab setup and love having three of these for *mostly* worry-free redundancy. Gives me 36 CPUs and 192GB to play with, allowing some headroom for VM failovers between the nodes if needed.
Might be time to upgrade to 10GbE in these for the fun of it.
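For anyone wanting to replicate the setup above, scheduled replication like that can also be configured from the Proxmox CLI with `pvesr` (a sketch; the guest ID `100`, job ID `100-0`, and node name `pve2` are hypothetical placeholders for your own cluster):

```shell
# Replicate guest 100 to node pve2 every 15 minutes (run on a Proxmox VE node)
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Inspect replication jobs and their last sync state
pvesr list
pvesr status
```

These commands only make sense on a Proxmox VE host; the same jobs can also be created in the web UI under Datacenter > Replication.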
The small 10GbE module blew my mind. If there was a way to pop in a couple of 3.5" drives it'd be perfect. That said, let me tell you that 10GbE with 11W idle is a feat; most systems you can build will NOT support PCIe ASPM and will prevent the CPU from going down to C8.
Patrick, can you confirm if this CPU is indeed able to hit C8 while idle with the 10GbE module?😊
If anesthetics aren't a huge concern, you can prob run externals, but I'd go with a couple of 2.5 SSD's personally.
@@YerBrwnDogAteMyRabitI don't think anesthetics will solve this issue. 😂😂
@@st3althyone haha yep.. We could Frankenstein something but no 12V will make it harder.
Maybe via Thunderbolt but I did not see it amongst the specs
@@Airbag888 Thank you, but I think you missed the joke with the misspelling of the word “aesthetic.”🤣🤣
@@st3althyone oh I did not but too much work replying to everything haha XD
Wow. I was planning to order a DAS, but you helped me realize that all I would need is a pair of NVMe drives for a similar setup. And maybe add some more SSDs in the 2.5" enclosure.
Also, be careful with those Sabrent Rocket 4's. They have a reputation of failing without warning, and when they do Sabrent generally doesn't want to cover RMA unless you register within the first few days of buying them, which is really shitty.
Maybe this is the reason they don't sell them in Poland (EU warranty requirements).
@@ytxzw Thanks for this hint. I would like to buy one for my home lab. It seems I have to buy it abroad.
Oh, they're absolute turds; the only ones I've ever seen were dead.
What do you suggest instead?
I'll upgrade my NAS to only SSDs soon! They have become so cheap lately! Very impressive how much computer you can fit inside a tiny case like this.
By the way, Linux being "taxed" is not measured, like Windows, by CPU percentage but by load average. A basic check, while running top, is to press the 1 key to show the number of cores; if the first number of your load average is greater than the number of cores, then your system is starting to be taxed. In addition, your I/O wait and such are indicators of whether or not your system is being taxed. Linux is not Windows.
So that little system could have been quite fine with a busy CPU.
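A minimal scripted version of the check described above (a sketch using only the Python standard library; `os.getloadavg` is Unix-only):

```python
import os

def load_status():
    # Compare the 1-minute load average to the core count,
    # the "taxed" heuristic described in the comment above.
    cores = os.cpu_count()
    load1, _load5, _load15 = os.getloadavg()
    return cores, load1, load1 > cores

cores, load1, taxed = load_status()
print(f"{cores} cores, 1-min load {load1:.2f}, taxed: {taxed}")
```

I/O wait still needs a separate look (e.g. the `wa` column in top), since load average alone does not distinguish CPU-bound from I/O-bound pressure.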
Great project, by the way! Great idea; you definitely stand out from other youtubers :)
Thank you
I would love to see the `H` variant on it, not because of the 2 extra cores but because of the 96EUs Iris Xe vs 32EUs UHD770.
Transcoding on 12700H, 13700H are a dream.
But this unit is ticking almost all of the boxes. ;)
Keep up with the good work! =)
Fair point. Sadly that would likely raise power consumption as well if we look at the Mini PC's we have with H parts (but many will think that is worth it.)
Shouldn't the Quicksync codecs be identical between the H and U series?
@@rightwingsafetysquad9872 Doesn't the Quick Sync hardware "horsepower" match the iGPU execution units and generation?
So awesome. Perfect and powerful little unit. I'd add more fans somehow, pop it on a laptop cooler or something. Not seeing them for less than $629 on eBay; most are in the $800 range for the base unit. That is quite expensive for my taste. It would perfectly replace the old Dell PowerEdge R700 that's running on top of my fridge. I pulled most of the spinning drives and installed some SSDs and some NVMe drives on a PCIe card in the PowerEdge, and got it down to 120-140 watts. Running Proxmox, it mostly hosts a T-Pot honeypot and Kali Linux, which I use to test my own firewall. It also heats my kitchen in the winter.
I guess its too late to find these for a reasonable price now. Everyone snatched them up.
There are new AMD mini PCs that can compete with this at the exact same price.
@@deaddevil7model number?
I prefer the Lenovo 1L. Some of them have an expansion slot that allows for you to install a low profile card.
Too expensive
You're not missing out unless part of the value to you was the build process. At this price you can get something similar or better with a lot less after-purchase labor.
Absolutely amazing. I always had a dream to build exactly that on Proxmox, but still ended up with a 32-core AMD rig as I needed cores in the end. 12 cores seems decent enough but obviously is a bit of a limitation, and the performance of those cores also leaves much to be desired in this particular solution. But for the specific use cases, I guess that's just perfect...
I mean, I'd think for a decent amount of home AND business uses those would be more than enough, especially with an all-in-one full-fat Docker VM. :/
I can't argue with the power consumption and performance advantages of this little node, but at what price? You can buy a single-CPU enterprise server for a tenth of the price and pay the difference in power consumption. Sound is usually dealt with by placing it away from the living area. But thank you for giving us ideas about the upgradability of this unit.
I'm patiently waiting for the new wave of Intel N100/N200/N305 motherboards & mini pcs coming out.
Low power draw, it's gonna be perfect for tiny servers.
We reviewed one i3 N305 Beelink EQ12 Pro system and also had a review video with fanless quad-port 2.5GbE N100/N200 systems. We have the N305 fanless video probably going live next month
The fact that 2 SODIMMs can do the same capacity and latency as my 4 DIMMs do is a crazy improvement in just a few months. This has more networking and storage in 1L than I have in an MATX system, but I guess my 7900XTX is an advantage of the size. It would be interesting to see some taller systems like this (maybe exactly 2x the height, for 2L) that could take an LP GPU and a beefier CPU cooler setup.
Are you thinking like this Lenovo? ruclips.net/video/E_an5heI1BU/видео.html
@@ServeTheHomeVideo Well yes I am!
Would be nice to have a list of all the TinyMiniMicros that can have an addon ethernet port. Let it be 1, 2.5, or 10Gb.
Exactly, i need a device with 2 ethernet ports (both 2.5gb or better), 64gb ram, LOW power cpu. Many cores. ability to have multiple m.2 nvme drives. I am considering
A) a variety of these 1 liter PC's...
B) Chinese motherboards with laptop cpus... and also,
C) 2650L type Xeon's (also in Chinese motherboards)
It's a jungle :(
ROG ALLY in the background there--- whatcha playing? :) Loving the videos!!
Thanks! Sadly nothing yet. Too much work plus getting married two weeks ago
Could you please create a guide on building SSD NAS? Also, when can we expect 10tb SSDs?
Already a handful of 8 TB units, but, not cheap!
Awesome find. In the past I had to use a TB3 to 10Gb on my mini hp since there were no mini nics beyond 1GbE. Those adapters were / still are expensive. 😓
I’m currently using dell optiplex 3050s since they let me use $100 LP connectx4’s 😎
I'm currently using an EliteDesk i5 G8 with a hypervisor. Thank you for the video; I did not know that you could add another NIC by removing the USB ports. I currently have USB-to-RJ45 adapters, and will be upgrading to a 10GbE NIC soon.
One thing right off the bat: it doesn't matter how many cores you have if you don't have the power for them, and 35W is terribly low; I doubt it can utilize more than 6 cores' worth of full performance. I have a 2400GE, and under 35W it caps at 2.89GHz vs. its full speed of 3.2GHz at 35W. When pushed to 65W it turbos up to 3.5GHz and gives full performance. While it is old, I fully expect more modern hardware to have the same limits. I only run a Minecraft server, but even with just that, terrain gen is still heavy and it does push the power in bursts.
I have an old Lenovo of the small ones, and they kick ass! I love this idea and will try it soon!
Nice little machine! I do hope there is some good airflow to cool that tiny 10GE NIC.
You are part of the future...
People are working from home and studying from home too...
You can put PoP OS on one of those and create Android apps too...FREE of course..
Soo.....
I started thinking, what about the Lenovo and Dell version.
Then, I started thinking about what about the DVR version or another specific use case version.
Still thank you, I really enjoyed this video.
Patrick you're making me want to replace my existing M-ATX homelab & NAS node with a couple of these (heck could just grab one and virtualize everything else!)
That is the point of the Project TinyMiniMicro series :-)
I know! I've been watching for ages and haven't really wanted to upgrade but now I do! Great content as always!
My small cluster idles at 10w each for 8500T/6500T mix for 40-50w *idle*.
Could easily migrate all or most of it to a single 12700T/96GB and cut idle usage a lot.
For NAS we need slots for 3.5 HDDs. I don’t see how these tiny minis will support them.
Just imagine an HA cluster with three of those HP nodes. 🤤🤤 The only thing I missed was a SATA port to plug in a 128GB 2.5" SSD just to boot Proxmox and use the NVMe mirror only for local storage.
The SATA data + power internal port remains, but we did not have the 2.5" + fan assembly and I get a bit worried about airflow. It is possible though.
I've been seeing a lot of confusion about the T-suffix Intel processors.
The lower TDP and boost clock only really matter when the system is being fully utilized; they won't change your idle power draw, and idling is what I would expect most homelabs to be doing the majority of the time.
I want this with 100% passive cooling!
Amazing for this size!
I think so too!
Used enterprise SSDs are a good option for home labs, especially for proxmox ZFS as their write life (even used) typically far exceeds consumer stuff.
We are going to have a big piece on that in a few weeks
And you should also cover that you HAVE to keep them cool; the whole case is the heat dissipator. You may need an external fan to keep it cool, or it locks up.
Yes @ServeTheHome, but could you build several more of these and combine them into your own personal home supercomputer?
Yes. Cluster!
@@ServeTheHomeVideo A closet supercomputer with cooling ventilation, where the hot air gets sent out over the top while cool air flows in underneath.
If you choose a faster SSD, could you use a larger heat sink to help with cooling, or maybe cut into the shell next to the SSD? I have poor airflow in my office and worry about heat control.
I had a Lenovo 6th-gen i5 tiny PC. It was my main node and Plex server. Daughter needed a PC, so I had to reformat it and give it to her. Oh well, life goes on, but these are sweet.
A note about using ZFS (in any form) for the boot drive:
If you are planning on passing through PCIe devices (as an example) (or if you want to passthrough the iGPU that's on the 12700T), you might struggle with updating grub because of the ZFS mirror.
(I know because I tried it before.)
If you don't use the ZFS root, passing through PCIe devices in Proxmox gets a lot easier or at least more straightforward.
Just something for people to be aware of when they are installing Proxmox.
Why is a ZFS mirror problematic with grub? Could you use another bootloader?
@@WOWIMEXCITED
If you follow the instructions on how to pass through a GPU in Proxmox (at the time, I think that I was testing with Proxmox 7.3-3), there is a step where you have to update the GRUB bootloader so that it would be able to enable the IOMMU groupings correctly, and also disable some other stuff as well.
When you issue the command for update grub, it will execute it, but then when you reboot the system for those changes to go into effect, it will fail to do so.
I originally tested this with my main Proxmox server which has four 1 TB drives, originally in a raidz2 array, to try and get this to work and it failed.
So, what I ended up doing is just building a more conventional RAID6 array for the boot drive, re-installed Proxmox on that, and it's been working fine, per the instructions (for GPU passthrough) ever since.
Just be aware of that.
In regards to the second part of your question about whether other bootloaders will work or not -- that I don't know.
I didn't try it, and especially not with (or for) GPU passthrough.
Thanks.
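For anyone hitting the same wall as this thread, the GRUB step being discussed is typically an edit like the following (a sketch; the exact flags depend on your CPU and hardware):

```shell
# /etc/default/grub -- kernel command line for Intel IOMMU passthrough
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# Apply on GRUB-booted systems:
update-grub
```

One likely explanation for the behavior described above: Proxmox installed on a ZFS root with UEFI boots via systemd-boot rather than GRUB, so `update-grub` has no effect there; on those systems the kernel options go in `/etc/kernel/cmdline` and are applied with `proxmox-boot-tool refresh`.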
Very nice; owning one of these is a dream for me, however.
I'm running a 1GHz laptop from 2012 as an Ubuntu server, another laptop motherboard as a NAS, and a TV box as a VPN server, lol.
Some folks find amazing deals on these types of nodes in yard sales/ dumpsters/ craigslist and so forth. They often get sold cheaply when they are off-lease. Keep up hope!
@@ServeTheHomeVideo I will. Hunting for good deals has been my life for the past few months.
Great video been thinking about using one of these tiny pc's. Really like the Lenovo Think Centres
I think the Lenovo or the HP are the two best right now. Just get something with an 8th gen Core or newer as the performance went way up when Intel started to compete with AMD. Plus for Win 11 HCL.
@@ServeTheHomeVideo wonderful 👍 thanks for the advice will do.
This is a fantastic video. Bravo on the amazing content as usual!
Glad you enjoyed it!
This thing looks awesome.. Looking to update my fractal tower/Supermicro Xeon server from 7 years service.
Absolutely bonkers and I LOVE videos like this.
A great server in such a tiny thing. Marvelous!
I just picked the most-reviewed post on this channel in the hope of drawing a bit more attention to this information: hey guys, if you plan to buy a Dell OptiPlex Micro, please avoid models using Foxconn blowers. I think Foxconn has messed up the RPM curve vs. temperature. In short, the blower on my 5080 is set to a minimum of about 1800 RPM, no matter that the CPU is only at about 33 Celsius (room temperature about 22), instead of around 1100 RPM as in the previous models (7070 and 7060). Yes, I have all three of these models, all with i5 CPUs; the 7070 and 7060 do not use Foxconn blowers. Although from 1100 to 1800 RPM there does not seem to be much difference, it does make a humming noise that you will definitely notice in a quiet room. Just avoid it.
I bought an old server board on eBay, 3 years old, with 2x 10G Ethernet and 4x 1G; it was only $244 for the board + Xeon Silver CPU + 128GB DDR4 RAM. No case, no power supply, just core components.
I am running 3x of the SER5 Beelinks that have the 5500u Ryzen in it. Running kubernetes on them. Swapped the ram to 64gb, put in a 1TB 980Pro and a 4tb SSD. those little things are fantastic for what they are.
It’s still just a single NIC which is fine for home use, but it sure would be nice having dual nics, especially if you have external LAN storage like iscsi or nfs.
I run beelink too, 2x 5800h 64GB 2TB each and 1 5600h 64GB 2TB... Proxmox Cluster + Docker + Kubernetes.
Nice setup. A few thoughts/questions: (1) It might be nicer to go with AMD machines if you can find them, since they are all performance cores right now rather than the P+E of Intel 12th gen and beyond; I just don't think there are many enterprise SFF machines like that. (2) What's your recommended DDR5 RAM speed right now that still makes the most sense, between 5600MT/s and 7000MT/s? (3) One last Intel bummer is the weak iGPU vs. AMD.
Correct on the iGPU, but Dell does not have OptiPlex Micro 1L at this point. So the Intel ones are the most current up to 13th Gen Core. On the DDR5 with these, 4800 is what this class of machine prefers to run at.
@@ServeTheHomeVideo Ya maybe we'll see better options here with meteor lake but it'll be a while until they are affordable used
Hi, one thing that I have noticed for a similar unit (G2 800 Tiny with i5-6500T) is the power consumption without any load: on Windows it was about 6W, but on Ubuntu Linux it was about 11W. Maybe there is a similar difference here when using Windows? Anyway, this is a great box. I'm also using mine as a home gateway/server, but this newer one is much more powerful from what I can see (2 NVMe drives, better CPU; not sure if the Ethernet is also upgradable on the older unit). Of course the price tag is different and I don't really need more power, but it could be my next box for such things. Thank you for a great review.
BTW, previously I had an i5-4670K idling at 70W with 32GB RAM and ~12TB ZFS5 as my Ubuntu gateway. The G2 800 Tiny with the i5-6500T, 16GB RAM, and 2 additional Dell USB 1GbE Ethernet sticks has a different storage configuration (no ZFS5 currently, only SSDs; I still have to set up automated backups or move to Proxmox or something similar, IDK yet). This simple small box is just so good for what I'm using it for, and it's going to pay for itself in about 1.5 years of electricity here in Poland.
96GB RAM? whattttt I never heard of this. Incredible.
Enjoy
Thanks for the video, as usual an inexhaustible source of ideas.
Can I ask you a question? In light of all your experiments so far, would you use an HP Elite unit 600 or 800 for your home proxmox?
Is it just a matter of money or have you found better performance in the 600 G9?
Thanks
If money were no issue, usually I would go with the 800.
@@ServeTheHomeVideo And after all these reviews, would you go straight with an HP Elite to make your 'heart' Proxmox node?
Thank you, Patrick
@@GLPRAGMA Right now I like the HP, then in a close second the Lenovos.
Honestly, I want this for a firewall + router + Steam cache box. Sure, I'd be fine with dual 1G and a Pentium for just a router/firewall, but with that i7, 32+GB of RAM, 10G LAN, and dual NVMe, this would make an AMAZING caching box.
I bet that with the USB-C you could connect a hotspot to it to get 300Mbps down for off-grid pop-up LAN parties where everyone brings a 7840H-based mini PC and everything is solar powered :P
I cannot wait for the 128GB SODIMMs. I've got an ITX board that has only 2 DDR5 SODIMM slots, and 96GB is a little bit limiting.
But mainly so that I can get a total of 128GB like my old system, or even 192GB of system RAM, without paying flagship 256GB prices.
Why not choose the Lenovo M80q for this project?
HP has an easier path to 10GbE and you can actually add a GPU alongside an add-in NIC with HP. You are right that Lenovos are good too
This is my Disneyland ❤
Ha! This is a comment I never expected
$500 USD can get you a 32c/64t EPYC cpu, motherboard, and quiet air cooler if you go open bench. Comes with a tonne more expansion. Also fully supports VMware ESXi.
The reason (at least for me) why you should use these small form factor units is HA and TDP. How much is EPYC, 155W? When you buy 3 Tiny/Micro PCs you will be under 100W total, with High Availability. I assume sooner or later HA will be one of the required things for our smart (or stupid) homes :-)
Guys, if you want proper capacity for virtualisation, old server hardware is the way to go. V3 Xeons go for £25, with DDR3 RAM to support them at similarly low prices, and they're still perfectly capable machines. Servers have a supported lifetime of 5 years across the board, but most happily work for double or triple that time, and because they're designed for power and stability, you will find plenty of reliable and replaceable components (most can be replaced with the server running, aka hot-swappable).
Great video. Unfortunately, the price of these minis has skyrocketed. I can't seem to find one less than $800 now.
Is the image wrong, or are you saying the stock SSD with 3567MB/s / 3058MB/s is not "fast"? Great video
That is very slow for a PCIe Gen4 NVMe SSD and that is not the full write endurance test or a sync write test.
@@ServeTheHomeVideo I don't see a need for speed (pun intended), meaning a faster SSD than that; it's not a PS5. Correct me if I'm wrong, but even with the 10GbE NIC the stock SSD would saturate the network bandwidth. Coming from my current NAS setup with HDDs at 120MB/s, 3500MB/s already seems wicked fast. I think I would prioritize capacity over speed in a build like this one.
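For anyone wanting to check the math, here is a quick back-of-envelope sketch (the SSD numbers are the stock drive figures quoted above; the 10GbE figure is theoretical line rate, ignoring protocol overhead):

```python
# Back-of-envelope: can the stock SSD saturate a 10GbE link?
link_max_mb_s = 10 * 10**9 / 8 / 10**6   # 10 Gbit/s -> 1250 MB/s, before overhead
ssd_read_mb_s = 3567                     # stock SSD sequential read
ssd_write_mb_s = 3058                    # stock SSD sequential write

# Even the slower write figure is well above what the link can carry
print(ssd_write_mb_s / link_max_mb_s)    # ~2.45x the link's capacity
print(ssd_read_mb_s > link_max_mb_s)     # True
```

So yes, for network serving alone, the stock drive already outruns a 10GbE pipe; the faster drives mostly matter for local sync writes and endurance.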
>Because Bill sent them
and yet, he didn't send over their 8TB sticks? 🤣
I did not ask. They literally get them in, put them in boxes, then ship them out sold the next day. 8TB is so popular.
You can use VLAN and still achieve an out of band / isolated / management network over a single NIC.
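A minimal sketch of what that single-NIC setup could look like in /etc/network/interfaces on a Proxmox host (the interface name eno1, VLAN ID 99, and addresses are just assumed example values):

```
# VLAN-aware bridge on a single NIC; VM and management traffic share one port
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Tagged management interface on VLAN 99 (example ID and subnet)
auto vmbr0.99
iface vmbr0.99 inet static
    address 10.99.0.2/24
    gateway 10.99.0.1
```

The switch port then has to carry VLAN 99 tagged. It is not true out-of-band management, but it does keep management traffic isolated.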
I am a big fan of the SFF units for my home lab.
Nice. Linux Mint with audio over HDMI? How?
But is that DDR5 ECC RAM? I want to use it with TrueNAS SCALE, for example, with ZFS filesystems.
Nope, not with a 12700T. You'd need Xeon or AMD for that.
Got recommendations for a 4-or-more NIC low-end box to use as an in-home firewall, for e.g. a general network (hard to get on, but lots of access), guest network (easy to get on, some access), IoT network (easy to put stuff on, very limited access), and of course the WAN port... all for, say, under $100? Or, at least, under $200? I don't care as much about the speed factors. Like, even if I put some 10G on the general network to get between a desktop and a file server or whatever, I don't care if it has 10G to the Internet, so, I don't see any reason why the firewall should need it. And I've been watching some of your reviews on devices with 4+ NICs, so, I know you have some coverage, but I feel like they're all more expensive??? If I missed something, please point me at it!
Check out the N5105 fanless firewalls we reviewed
@@ServeTheHomeVideo Will do; thanks!
Hmmm... looks like still about $300... was hoping for less than that.
Have you explored things like the dfrobot "Raspberry Pi Compute Module 4 IoT Router Carrier Board Mini"? -- if I could get a 4-NIC version of that, I think it'd be perfect!
Great walkthrough.. subbed.. thanks!
Almost Black Friday, a good time to start building. Would you change any of these parts if you were to build it today?
I want this… This is what I have been looking for, for so long…
Extra like for using proxmox VE 8
Had to re-record that this week because the video was done... Then Proxmox VE 8 was released
Neat find on that Hasvio switch. Odd that it doesn't say what PoE spec it outputs on AliExpress though. I figure at least af/at, but I'm hoping it'll do the full bt 60/90W as well (not holding my breath).
Did you get any more detail from the vendor on what PoE spec it comes with?
No - but since it arrived I can tell you it is a bit funky. 25.5W per port on the Fluke on ports 2-8 IIRC. One port was not PoE. The others were only PoE+.
@@ServeTheHomeVideo Lame, that's what I suspected since they didn't bother to say, probably only AT at 25/30W. Thanks for the update in any regard, looking forward to what you find out as I already have one device that wants 60W BT I had to get an injector for.
Not being negative, but I am unsure about the actual usefulness of something that costs $550 and ends up costing nearly 200% more in upgrades when it is just a Proxmox box. You could do the same with something cheaper.
Out-of-Band-ish... ;-) These videos are awesome!
Ha! Amazing what stays in the edit.
The Intel NUC 9 is now a more budget-friendly choice if you plan for more NVMe slots. I built one half a year ago with TrueNAS. BTW, the two TB3 ports can provide 20Gbps point-to-point networking.
woohoo Comment#1 View# 10 - this is as usual awesome content !! Patrick , Rohit and the team at STH are doing a great job !!
Thank you!
I find it hilarious that you characterize a 3GB/sec SSD as "slow" - when not all that long ago we had to mirror drives together just to saturate Gig Ethernet!
I remember those days. The first article on the STH main site was RAID'ing 10K/15K HDDs. I mean, SSDs have been around for well over a decade at this point so they are a pretty mature technology.
Off topic... Steve, how often do you hit the gym? You are turning into a unit
Any news when 48 GB DDR5 ECC UDIMMs are going to come to the market?
Nice content!
I have an 800 G9, will it be able to recognise the 96GB of RAM?
ESXi issues with the HP Elite Mini 800 G9:
The native NIC, an i219-LM, has issues with ESXi 7 and 8. The link keeps breaking when activity increases.
The 2.5GbE NIC is fine on ESXi 8, and on ESXi 7 with the community net driver.
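For reference, the community networking driver fling for ESXi 7 is installed as an offline-bundle component, roughly like the sketch below. The datastore path and bundle filename are placeholders, not the real file name; check the fling download page for the actual bundle.

```shell
# Sketch only: installing the Community Networking Driver fling on ESXi 7 via SSH.
# The bundle path below is a placeholder for whatever you downloaded.
esxcli software component apply -d /vmfs/volumes/datastore1/community-net-driver-bundle.zip
# A reboot is required before the 2.5GbE NIC shows up with the new driver
reboot
```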