Another important thing to look for aside from the battery is also the heatsink pins on RAID controllers and NICs. As the plastic pins age, they get fragile. See our "Most Spectacular Hardware Failure in the STH/ DemoEval Lab 2016" for what happened when one failed on a 10Gbase-T NIC.
Wendell, can you do a ZFS e-waste edition? eBay hw, metadata devices, NVMe incoming and spinning rust offloading?
19:00 Excellent auditory illustration, I love the fiber ASMR
I click the video as it goes live, and Jeff was already here 6 days ago :O Has Red Shirt Jeff been dabbling with time travel again?
Our Patreon and Floatplane subscribers get early access to all of our vids
@@daltonchaney1504 I was inside the fiber optic travelling backwards at 1.2x the speed of light :)
@@Level2Jeff I declare you to be Schrodinger's Geoff.
@@jameslawrence8734 Oh shoot, posted from the alt lol
I missed a good portion of this video because I was distracted by the hardware on the shelves behind Wendell.
😄 Ain't that the truth? At one point, I was saying, "What the hell is that?" It was an ancient laptop computer sitting on an ancient InkJet (or dot matrix?) printer. 😄
The autofocus thought so too.
Immediately noticed the compaq and the older macs
@GizmoFromPizmo I can still hear that printer go between 2 floors and 2 doors. 😂
Just gonna have to watch it again
Thank you for calling this out. For some reason we are still being told that 1Gbps is enough for most people, even while 25Gbps is not really a problem. Heck, you can even go 100Gbps for $700 if you get a MikroTik CRS504-4XQ-IN. It infuriates me that we are being fed the lie that even 2.5Gbps would increase costs too much and that 10Gbps is stupid. I really think that if faster networking were standard we would find ways to really use it. Personally I would love to start seeing solutions that implement SMPTE ST 2110, which grabs an uncompressed video feed from a GPU's frame buffer and sends it directly over an IP network. This isn't crazy. HDMI 2.1 is only like 48Gbps and allows 8K video over that. If you have enough bandwidth on your network you could have a central server in your home and access it from anywhere.
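A quick back-of-envelope check on those numbers (rough arithmetic, raw pixel payload only, ignoring blanking, audio, and ST 2110 packet overhead): an uncompressed 8K60 8-bit RGB feed lands right around the 48Gbps figure mentioned above, which is why anything ST 2110-like at home really does want 25G+ links.

```python
# Raw pixel payload for an uncompressed video feed (ignores blanking, audio,
# and SMPTE ST 2110 packetization overhead).
def raw_video_gbps(width, height, fps, bits_per_pixel=24):  # 24 = 8-bit RGB
    return width * height * fps * bits_per_pixel / 1e9

print(f"4K60 8-bit RGB: {raw_video_gbps(3840, 2160, 60):.1f} Gbit/s")  # ~11.9
print(f"8K60 8-bit RGB: {raw_video_gbps(7680, 4320, 60):.1f} Gbit/s")  # ~47.8
```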
The problem is TrueNAS and probably samba. I've got a Xeon E5-1660V4 (Broadwell) in an HP Z440 with 8 PCIE 3.0 nvme drives in a stripe connected with a Mellanox ConnectX3 using both ports. I get line rate (20Gbit/s 2GByte/s) no problem using Windows Server 2012 R2 or 2022. Without RDMA, CPU usage is high on a few cores but there's still headroom. With RDMA, there's tons of headroom left. TrueNAS always required more work to get somewhat decent speeds out of it and is always slower than Windows for SMB. For me, Windows Server just made way more sense than TrueNAS.
I guess that not many people watching this video have any idea how awesome those times from the background image were.
Back then, you were considered a nerd - sometimes even in the negative sense - when you were into computers.
And 2+ decades later, it's a completely different world where computers are just as common as a TV once was.
I honestly liked the old days way more than today. Pioneering the internet with first BBS, then Gopher and then Netscape was just such a freaking awesome time.
Cries in 2007 dual Xeon x5650s. They were trash when I got them for $12 a piece. But, still runs Jellyfin and everything else at the same time.
One of the CPU fans quit a few years ago, and it's running in a garage in Louisiana, so it's invincible, despite the SAS controller dying a year or two ago.
Dual X5675s in my cold storage. I turn it off after backing up files to it.
Power consumption is an issue.
That Asrock rack board is what I have running my truenas setup. Works fine with a 10gig card and my LSI HBA, 5600 Ryzen cpu and 64gigs of ram. Love it.
Heh, I have almost the same setup, tho instead of TrueNAS I'm using Proxmox and a Ryzen 5700X :D Those Asrock Rack machines are really awesome for homelab stuff
The E5-2699 v3 chips that can drop into that board are anywhere from $40-60 U.S. And they have 45MB of L3 cache (ring bus).
Those are 18 cores / 36 threads each, with 40 PCIe lanes and ~68GB/s of memory bandwidth, and there is also a BIOS mod that unlocks turbo on all cores, if you can keep them cool.
On a budget, that's more than respectable, esp. for home use.
There's even an OEM variant (2696) that boosts to 3.8GHz, but you'd definitely want water cooling at that point.
But for those who aren't paying for those sweet AMD chips, you can achieve excellent performance per dollar and put some of this e-waste to good use and have a nice home NAS.
36 cores, 72 threads @ 3.8GHz, 90MB of L3 cache? Total respect for the LGA 2011-3 platform.
Used to have a pair of R320s running the homelab and had to put some low-power 2.3GHz E5-2450Ls in them to keep the fans at a reasonable level. Then I moved to a Ryzen 7700X; to say that I appreciate clock speed in server builds now is a bit of an understatement.
And it’s not only just clockspeed. The instructions per clock improvements are massive.
I am sure Wendell is saying words and doing something in the foreground but those SE's, Classic and even the Packard Bell in the background have ALL of my attention.
Want my 6400/200? 🤣
I like these e-waste videos! I have always enjoyed taking something that would have been in the scrap heap and giving it a new life.
lol I have my Steam library on iSCSI with TrueNAS. Works out really well actually. You would be surprised how much loading times can improve, especially if most of the game files are cached in ARC.
I too host my steam library on *really* crappy hardware via iSCSI on 1Gb networking. It works. Is it fast? No. Do I care? No. I have patience, I can wait for a game to load.
Same here but without the ARC caching.. still pretty fast tho
Can't wait another 10 to 20 years for those 3000 dollar kioxia drives to drop to 1500 dollars 😊
You meant another 30 years 😅
3:00 I have an extremely similar RAID controller with what looks to be the same battery pack, but I do believe that this battery pack should in fact use a capacitor, or several, which can of course also swell, but are less damaging.
This is the exact kinda mad scientist content we need!
Well, I would say that one person's trash is another person's upgrade. I installed a 2011-v3 CPU as an upgrade about a month ago - or a sidegrade, more like, a sort of repurposing. High-end stuff is fun, but at a point one needs to consider what is worth what in terms of disposable funds. The sidegrade went from 4 cores to 14 (3.7GHz down to 2GHz), I popped in 64GB more RAM for 96GB total, plus a quad NVMe card and quad GbE, and it went from basically e-waste to something usable.
Love the old Macs on the backshelf - first computer I bought - no hard drives!!!
Cries in Xeon V2...
Phenomenal focus! I thoroughly enjoyed it!
Damn, my primary machine is an E5-2667 v4 and it's relatively new to me :c
That's a good CPU. It has pretty good cache.
"this thing CANNOT WAIT to catch fire" ❤🔥
Retro-tech guys just have to worry about batteries leaking and corroding the board, necessitating fiddly repairs to damaged traces. Modern tech guys have to worry about batteries bursting into flame and destroying any chance of repair.
Can confirm the lower power from swapping these configs to SSDs. Just built an archive server with a 26-bay R730xd LFF. It included dual 1100W PSUs with dual 16-core E5-2697A v4s and 512GB RAM, plus 26x SATA SSDs and a Samsung PCIe NVMe SSD. Idles at 180W.
Sweet Packard Bell in the intro. PB was my 1st PC - $1900 w/o monitor: Pentium 60, 8 MB RAM, 420 MB hard drive, 2x CD-ROM drive, with Windows 3.1.
As someone who's been using flash at ~1GB/s over 10G to host my steam library on a TrueNAS VM, it is indeed quite nice.
I recall some 15k RPM drives dying after a decade of 24/7 operation, and we did some back-of-envelope maths on how far the outside of those platters would have travelled over their lifetime. The results were very impressive.
Back in the heyday of 15k drives, you could almost cook your lunch on some of the larger arrays. I seem to recall drives sitting at around 55°C, just idling.
did it go to pluto and back 25 times?
@@redtailsCan't imagine that. Pluto is really far away. Like, *really* far!
@@danieloberhofer9035 So, rough calculations: a 2.5-inch HDD is 6.35 cm, so the platter will be about 6 cm in diameter. Say the circumference of the outer track is then ~18 cm; at 15,000 RPM that means about 45 m/sec. That equates to ~14 million km per decade. At their nearest, Earth and Pluto are 4,280 million km apart. So indeed, 14 and 4,280 million km are quite different. Though 14 million km is still LOL.
@@redtails Sounds about right. Just shows how (quoting from Douglas Adams) mind-bogglingly big space is...
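For anyone who wants to redo that envelope maths, here's the same estimate as a tiny script, using the same assumptions as the comment above (a ~6 cm platter spinning at 15,000 RPM, 24/7 for a decade); it lands in the same ~14-15 million km ballpark.

```python
import math

# Rough sketch of the platter-travel estimate from the thread above
# (assumes a ~6 cm platter in a 15k RPM drive, outer edge, running 24/7 for ten years).
diameter_m = 0.06
rpm = 15_000

speed_m_s = math.pi * diameter_m * (rpm / 60)      # ~47 m/s at the outer edge
seconds_per_decade = 10 * 365.25 * 24 * 3600
distance_km = speed_m_s * seconds_per_decade / 1000

print(f"Outer-edge speed: {speed_m_s:.0f} m/s")
print(f"Distance per decade: {distance_km / 1e6:.1f} million km")  # ~14.9 million km
print("Earth-Pluto at closest: ~4280 million km")
```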
It was interesting to see the bottlenecks pointed out, but I would have loved to see the difference in performance and price compared if you had stepped up to Xeon E5v4 DDR4 with 12Gbps SAS and 25GbE. I think that's currently where the best value for money is. If I was spending money today buying something that I wanted to last for several years, I can't see justifying anything lower at this point. E5-2697v4 18 core CPUs can be had for $50. So much room for activities!
here's why my E5-v4 is NOT trash though - it's what I can afford. Sometimes awesome has to take a backseat to attainable unfortunately. And yea, I know it isn't power efficient, I know. But it IS attainable.
At the end of the day, it's e-waste for enterprises who want their stuff to just work fast. But for home use? Sure, if you can get a bargain or even get it for free, go for it. ^^
I’m running a couple E5-v2’s with no complaints. Sure, it’s not the fastest thing in the world, but it’s got all the PCIe lanes I need for accelerators, extra NICs, etc.
I think that the wording should be tuned in the video to be less unintentionally derogatory of this older hardware, yes. I'm glad there is a video like this though at all. I'm running a 2009 Dell Precision still as a desktop and even this would be a great upgrade.
15:15 -- Very nice surprise! Love X470 D4U and X570 D4U-2L2T Asrock boards. They have good second-hand value for a reason.
I have two of them. I want another one but can't find anything for a reasonable price. (new or used)
@@VTOLfreak Never got one, wish that I had. Long live AM4! Asrock was so smart making those boards. Really an homage to homelabbers.
I just upgraded to that generation, but I get them from my customers and I don't have to pay power at work ^^
But yea, if I really needed a lot of power for work, I would buy new.
Thank you good sir!! I just picked up a pallet of those Cisco servers… Fun times
Wow. I’ve been working on computers for over 20 years and just now found the s/n label on the front lol
Looking forward to the video where you make a Beowulf cluster out of three Mac SEs and a Classic II.
Isn't Gluster discontinued?
Personally I use 40G in my home/homelab network, because the cards were just as cheap as the 25G cards, but the 40G switches are cheaper (there's currently a Cisco -jet plane- 64 port qsfp+ 40G switch on my ebay watchlist for 200€)
I’d feel a lot better calling Broadwell trash if ECC support on modern consumer platforms was better than Intel flipping the bird or AMD giving motherboard vendors any leeway for implementing features. At least Skylake is dirt cheap now with marginally better performance.
I've always felt that something is off with TrueNAS and fast storage. I know you've touched on it, and Linus showed the same thing on his all-flash server. I just built an Epyc TrueNAS Scale server with a set of 4 PCIe 4.0 NVMe drives and struggle to get them to saturate a 25GbE connection to my desktop or Threadripper workstation. I even have a 12-disk SATA 6Gb SSD array that also seems to be pretty lackluster with a modern LSI PCIe 4.0 HBA.
When I saw the title, I thought it was going to be about retro-tech with a 386 or 486 processor and a bunch of old SCSI hard drives offering 25GB of capacity and running an early version of Linux.
Still listened to the whole thing, though.
Bro 25 gigs is not nearly enough storage for a NAS. U need at least 256GB.
Broadwell was trash. Haswell was gold. I still use X99 for my Blue Iris surveillance PC: Asus Strix X99 w/ 128GB of ECC, uses all EIGHT DIMM slots, w/ a 2699! Even use my old Corsair v1 H110. Still kickin'. Also have a Windows 10 machine for my Plex server/media/files: Gigabyte Z590 w/ 10850K, HW RAID 5 w/ 8x 3TB WD Reds. And I use a single Seagate 16TB Enterprise HDD to back that up. Better than nothing. I also have an Unraid server, made from an old FX-6300 system I had, to back all that up on a different machine for fun.
Yeah, 2 node clusters are ... fun. Both nodes trying to shoot the other and succeeding - that type of fun. Not the kind of thing for high-reliability home networks - only the kind of thing to encounter at work 😛 (when being paid).
100% agree on the SAS 12G front; in fact I would also be very picky about which HBA/controller it is. Some have really bad JBOD modes, which is what I suspect is in that Cisco.
Although 2011-v3 can still be pushed to saturate 25G, albeit with a different config to these machines. Like Wendell said, get a proper RAID card that does a good job of JBOD, or just get a decent HBA that can push SAS 12G, as I have achieved that even in my lab and so have others. Heck, if you really know what you are doing with RoCE and NVMe you can push 100Gb, although that is not using ZFS. That said, 2011-v3 is a dinosaur already, and like Wendell I agree it's trash and would not put anything on it today, as it would be too slow. I'm already considering 1st-gen Scalable and Epyc as trash, although my current flash NAS is on a 1st-gen Scalable system and I can't wait to get off of it.
There's something to be said for cost though, so maybe 2011-v3 still has a place. I might not want to run ZFS on it in a traditional way, though, or even run it at all.
This is the same way I have my NAS set up, without a 25GbE switch! It's crazy how fast SMB is with RDMA!!
3:38 "sas 6 is 5-600 GB/s"
I'm guessing you mean MB/s?
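The MB/s guess checks out: SAS 6G runs at 6 Gbit/s on the wire with 8b/10b encoding, which works out to roughly 600 MB/s of payload per lane - a quick sanity check:

```python
# SAS "6G" is 6 Gbit/s on the wire; 8b/10b encoding leaves 80% of that as payload.
line_rate_bps = 6e9
payload_bps = line_rate_bps * 8 / 10
print(f"Usable per lane: {payload_bps / 8 / 1e6:.0f} MB/s")  # ~600 MB/s
```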
So .... I got a minor problem here. We should never call a solution "trash" if it meets the system requirements. In fact, I would call some solutions "trash" which excessively exceed system requirements. For example, the place I am employed is *way* over-built in terms of compute and storage. Storage especially. We have some older all-flash SANs which can hit something like 40k IOPS if you push them. Those money-making, mission-critical SANs on average are riding....1k IOPS.
And to replace those SANs, what have we done? We went with a wiz-bang fancy HCI vendor that you've certainly heard of ... but the system requirements of the workloads haven't changed.
Just because something is older does not inherently make it trash. Yes, it's probably not a good idea for the average business to operate super old hardware. But if it meets the requirements of the business, it isn't trash.
I like the low cut on the section in the 20-minute range, although it could be a bit lower. I think you've also tried this in the past, but no one's really noticed.
2:30 The bad focus on Wendell makes him look like he is in front of a prerendered background image.
Really looking forward to NVMe disk shelves for M.2 drives, or something similar to the Icy Dock ToughArmor MB873MP-B, becoming reasonably priced.
Oh man, I am excited that I have 2.5Gb/s. A little jealous of your tinkering with that 25G.
Man this makes me regret not stocking up on gen3 enterprise drives back when you could scoop them up for $50-$75/TB 😅
I didn't catch it in the video, but you are using jumbo frames on your iSCSI interfaces, right?
Love my Broadwell! Recommended upgrade?
2696 v4s are probably the best way to go if you reaaaally want to use 2011, since they all-core turbo to 3.4-3.6GHz all the time, but they draw 200-plus watts.
Me running an Opteron X3216 server, getting more work done than some of the high-end Xeons.
So in next week's video we'll be attaching a 100G NIC to my sewing machine cluster - it'll be darn fast!
Got a near-identical model stuffed with 15k physical SAS drives, and all it does right now is Docker webapps, Syncthing, and running LLAMA3 70b and Groq 1.0.
Actually, those AM4 boards make for solid servers. I have 2 in my homelab / dev env, 0 downtime in 3 years; they run with buffered ECC and both have perfectly performant zpools. Running Ubuntu + MAAS to provision and LXD for VMs. Though I will say I have one with an Intel 10Gb NIC and one with an onboard 10G NIC, and the onboard one has some heating issues.
I spy an old antec case under the desk, still have the exact same gathering dust too.
Please dive into TrueNAS tuning. There are so many things that should be default but aren't. And performance is so far behind for the hardware it's running on, especially SMB without RDMA and the lack of NVMe-oF.
What is sad is that his green screen ("I hope it is") used to look similar to where I was working, but we also had old motherboards screwed to the walls.
Does a NAS need to be above 10Gig?
Is Wendell super rich? Because this is way better than any homelab NAS I've heard of, lmfao.
I can't imagine SAS 6 is going to be much better than 2.5" SATA SSDs, which would be so much cheaper with way higher capacities.
Ah yes, more hardware I cannot afford and cannot use at home anyway.
But I love watching!
25 gig is pedestrian? We're just starting to get most of our new deployments on 10/25G switches; two years ago everything was 10G only, and we still saw servers using quad 4GbE to serve VM traffic.
I'll give you the Broadwell, but those are what's actually attainable for the casual homelabber. My last "lab" box uses an AliExpress 2011-3 board with quad-channel DDR4, 4x SATA SSDs (512GB for 20€ each) and a 2.5GbE NIC. The whole build was probably less than 350€ - what is one of those Kioxia drives worth?
I'll try and get a 10 gig NIC in there instead of the GPU, but 25 is firmly in the realm of "I could build a whole 'nother box for that money!".
Wouldn't it be much better to buy a used Zen2 EPYC server?
Those looking for a cheap 25Gb server should rather use a decent PCIe 4.0 system, put 4x 4TB NVMe drives (or more) on a PCIe slot with bifurcation, and then create a RAID volume in TrueNAS with a 25Gb network card.
That would be a nice test.
A single 990 Pro or something should be able to do ~7,500 MB/sec reads, and dual 25Gb can only do ~5,000 MB/sec. You should only really need one 4.0 x4 slot.
@@fhpchris Yes, and lose data when the drive crashes. 3 would be the least, that is if you are interested in RAID 5.
@@Labombab raid....dead
@@rezenclowd3 Yeah, like your BMW.
@@Labombab ? I recommend school.
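Rough numbers behind the claim a few comments up (raw link rates, ignoring protocol overhead): dual 25GbE tops out below what a single Gen4 x4 drive can read sequentially, so one good NVMe drive really can keep both links busy.

```python
# Rough ceilings; real SMB/iSCSI throughput will sit below the raw link rate.
dual_25gbe_GBps = 2 * 25 / 8        # 6.25 GB/s raw link rate for two 25GbE ports
gen4_x4_nvme_GBps = 7.5             # ~990 Pro class sequential read (assumed figure)

print(f"Dual 25GbE raw:      {dual_25gbe_GBps:.2f} GB/s")
print(f"Single Gen4 x4 NVMe: {gen4_x4_nvme_GBps:.1f} GB/s")
# With TCP/iSCSI overhead, ~5 GB/s over the wire (as the comment estimates) is realistic.
```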
Didn't Windows have something called "One Drive" or "I-Drive" built in that configures your drive array as a single drive?
Anyone used that or have anything to say about that, good or bad?
Love videos like this.
Currently running a R710 re-purposed as a NAS with HBA to let TrueNAS run ZFS.
Though would love something newer.
Cost of hardware in Australia can be a little prohibitive though.
But isn't this what homelabs are all about?
The 2650 v3 is Haswell, not Broadwell. I have an R730xd with dual 2699 v4s running TrueNAS Scale, and it can do ~3900MB/sec in CrystalDiskMark over a dual 25-gig Intel NIC and SMB. I have 20 Samsung PM883 SATA SSDs and 1TB of 2400MHz DDR4. I had a lot of problems with Microsoft Defender when trying to run fast SMB shares. Can you verify in iperf3 that you are getting 24+Gb/sec on each of the links?
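For context on why that ~3,900 MB/s figure looks network- or SMB-bound rather than disk-bound, here are rough numbers (the per-drive throughput is an assumed typical SATA SSD figure, not measured):

```python
# 20 SATA SSDs vs. a dual-25GbE link: the array can feed far more than the wire can carry.
drives = 20
per_drive_MBps = 530                    # typical SATA SSD sequential read (assumed)
array_MBps = drives * per_drive_MBps    # ~10,600 MB/s aggregate ceiling
dual_25gbe_MBps = 2 * 25e9 / 8 / 1e6    # 6,250 MB/s raw, less after SMB/TCP overhead

print(f"Array ceiling:  ~{array_MBps} MB/s")
print(f"Dual 25GbE raw: ~{dual_25gbe_MBps:.0f} MB/s")
# Seeing ~3,900 MB/s suggests the links (or SMB itself) are the limit, hence the iperf3 check.
```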
40Gb is more common second hand. I have 10Gbe + 40Gb QSFP switches I need to sell but they aren't worth enough to be bothered.
Starts out with a Cisco system, then in the middle of the video he junks that system and runs the micro system board instead - got us all excited about the old system.
So an NVMe SSD will work with a SAS backplane?
You could use actual iSCSI HBAs; an FX-8150 could saturate 40GbE in 2013.
I will agree, e5-26xxV3 is trash... I have SO many negative memories from having to diagnose OOMs on RHEL6 KVM virtual machines.
All I have is 2011 v2 and v3, and with my current workloads it hardly breaks a sweat, but I have always planned to set up more - time slows me down, though. I can run gaming VMs remotely with Nvidia GRID K1 and K2.
I've been looking for a while - where do you source those Kioxia PM7 drives? I have a few servers I'd love to upgrade.
I love the vids.
The higher-end stuff - while I get the reasons - I don't know anyone who can afford $200 for SSDs here, 25Gb NICs there, and new servers.
I get why the harsh view of the old kit is taken, but for many of us it's stringing things together with sellotape and string and just hanging in there. Getting budget for IT has never been harder.
Watching the stupid levels of Graph and throttling issues with 365 makes me think the Xeon V2 junk has a real-world place. :/
At 3:20 it focuses on the tech in the shelves!
Does putting drives in ZFS RAID improve random read/write speed as well, or just the sequential read/write speed?
But now that AM5 is a server socket :)
For a direct-attached cluster you would have to set up routing with OSPF. It will work without it, but if a link fails it will cause issues.
Also, please take a look at Linstor / DRBD - it's an interesting way to cluster host storage. It creates RAID 1-level redundancy, so it does allow you to make a cluster with 2 hosts.
What about 40Gb? At least where I live, QSFP 40Gb is way cheaper.
What practical home use do old enterprise servers meet that an old dell t3610 cannot?
My home stuff is all on a t3610. It uses very little power, has good memory channels and is at 3.7 GHz with a Xeon 1620. I've used it as a game server exclusively (was originally slated for home SQL server stuff but never got around to it). 7 days to die, Palworld, Minecraft, Ark and FoundryVTT have all run on it quite well.
You can still get them ready to go for $100 on ebay... Is there anything better bang for buck out there right now? I'm actually in the market somewhat (ain't we all) and would like something with good single core as well as multi core. Anything that passes $300 in hardware would be beyond my budget.
Your T3610 is the same platform as a same-age server. The main benefit of the server option is the BMC, letting you access the system remotely even when the OS isn't working. The server also supports RDIMMs which some workstations might not. The main benefit of the workstation option is lower noise. You're also more likely to have sleep mode implemented on the workstation.
Now the question is... would putting a decent 12G SAS RAID controller in an old server and using hardware RAID offer better performance than TrueNas?
I've tried using it without a switch, but Windows constantly wants to use the slower NIC. I've watched many YouTube videos, changed the order of the NICs, and many other things. It still doesn't work.
The video length is also 25:25. Nicely done.
I'm wondering if my old Ryzen 7 1700 on a B450 chipset will be good enough to run a NAS tbh. I'm trying to talk myself into upgrading my PC so I can turn my current one into my first 'real' server
Me, watching this knowing dang well I’m just going to stick to my old Ryzen 7 1700 home server and only going to upgrade when the 5950X drops to e-waste prices 😂
Honestly, my 1700 is probably better than broadwell anyway core for core since it’s clocking higher than most of the broadwell xeons.
16:15 Wendell, I want to see y'all explode that Packard Bell! They blow up real good! Oh man, I hated my Packard Bell.
This is awesome, I just got one of those exact Cisco C220s (mines a M4S) from cleaning out our IT closet. 64GB RAM and slapped two 2640 V4s in it for $10. Is it good for anything? Probably not, but I’m having a ton of fun messing with it.
We need you in the iSCSI rabbit hole 😅
@Level1Techs which Xeon D do you recommend? Even gen 10 stuff is "not cheap", getting X11SDV or even X12SDV mobos from Supermicro are super hard to find. @13min
Would love to hit the corps while they're upgrading, get the routers, and resell to consumers.
What about Xpenology? Can you make a video running it as an nvr with surveillance station? Also is there any chatter as to when intel might replace the Xeon max 9480?
Haswell, Broadwell. No, I'm not quite ready to bin them just yet.
I never get why people have top looking that bad. Just press l, t, m, Z, Enter, 1, W (in order, case sensitive). That configures color rendering, shows per-core usage, and renders CPU and memory usage graphs, then saves the configuration. You literally just need to type this once. You can learn more about the keys in top by pressing h.
Which top has persistent settings? so I can avoid it.
@@shanent5793 top from procps-ng. Installed by default on Debian, Ubuntu etc.
Config file is .config/procps/toprc
@@shanent5793 Not sure why my last comment was deleted, but my answer was procps-ng, as installed by default on Debian, Ubuntu, probably more distros. Config file is .config/procps/toprc.
@@Maxjoker98 "Just press l, t, m, Z, enter, 1, W" 👍 'Tip top' top config. Thank you.
If I'm having to go with a 10 year old server because I blew all the money on those Kioxia's, it's not the server's fault that it's slow. 😂
Curious as to what SSD's would be a more reasonable, affordable pairing for a system like that?
Wendell, the nerdiest guy on the internet.
So how fast could this back up a 1 TB computer drive?
5-6 minutes if you can approach line speed. With likely bottlenecks more like 16 minutes.
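The arithmetic behind those estimates, for anyone who wants to plug in their own numbers (the ~1 GB/s bottlenecked rate is just the rough figure implied by the reply above):

```python
# Time to move 1 TB at 25GbE line rate vs. a more realistic bottlenecked rate.
data_GB = 1000
line_rate_GBps = 25 / 8          # 3.125 GB/s at 25GbE line rate
bottlenecked_GBps = 1.0          # rough real-world figure (assumed)

print(f"At line rate: {data_GB / line_rate_GBps / 60:.1f} min")    # ~5.3 min
print(f"Bottlenecked: {data_GB / bottlenecked_GBps / 60:.1f} min") # ~16.7 min
```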
I am so confused- is a chunk of this video missing or something? At first we're looking at a Cisco server then suddenly we're talking about a Supermicro chassis out of nowhere... Then I swear you bring it up like you were talking about that chassis the whole time lol. I assume just an editing snafu? Or did I black out mid video... I'll rewatch.
So what you're saying is... I should just buy a pair of 25 Gbps NICs to use in my 10Gbps SFP ports. 🤔
Then I'll be ready down the road when 25 Gbps switch prices come down. 😁
~$3-4k for a 25Gbps switch... not bad. That's home-worthy imo.
Not worried about static discharge on the hard drives?