Frankly, you talking about why a particular server "lid" is better than others, showing us the tricks they used and what makes that particular machine special, is EXACTLY why I watch your videos. Keep it up!
Haha thank you!
I work professionally in a mostly HPE datacenter environment. Seeing how so many features that exist today in the C-series, Synergy and newer ProLiant servers all originated with Compaq so long ago is really fun to explore. Unironically, the memory modules are still the first thing to fail in almost all of my servers.
Ha, funny to hear memory modules are still an issue.
@@clabretro So annoying tbh. I had never seen such high failure rates until I worked with this gear. I was always a Dell guy until I worked here lmao.
It's hard to believe that stumbling my way through hosting a phpBB forum in 2005 basically paved the way to the network admin career I'm in today. Nice trip down memory lane with the LAMP install and head scratching when they don't communicate properly. Love the content and love seeing old hardware (especially stuff that I've never heard of) get a new lease on life. Keep up the great work!
Thanks for watching! It's been fun hearing how many other people messed around with phpBB back then.
It can go both ways. Went from running several dial-up BBS boards to running websites with PHPBB and even remote hosting before AWS. But it didn't lead to a career for me. Wish it had.
31:48 Man, I love the "news" on that HP website. Literally "Itanium's future is promising"...
haha I know
Thanks for exploring all this heavy junk for us. The kid in me would have loved to play with all this stuff, but as I get older I've lost some enthusiasm, and especially lost my patience for lugging these boat anchors around. The extra energy it takes to set them up just completes the trifecta of what I'm missing.
I guess what I'm saying is I like this stuff but I have no energy or time to mess with it, so thanks for letting me live vicariously. Cheers
That means a lot, yeah I totally get it. It can drain you haha.
It's interesting to see how quick hardware swaps, down to the CPU level, were important design elements before modern virtualization really took hold in the server space.
Nowadays no one would really care too much about being able to swap out a CPU cage fast because if something goes wrong with a physical machine, you'd just migrate the VM to a different one, but back then it could have been incredibly important to get up a particular machine as fast as possible.
Exactly, nowadays you'd have a hundred of these things running and wouldn't care when one failed.
I remember when one of the HP servers where I work had a faulty PSU, our contractor shipped out a whole new server.
I was genuinely shocked by this because I had expected something like the PSU to be a relatively simple in-field replacement.
heh kinda defeats the purpose of the modularity
New best tech channel on this hell hole we call YouTube.
HP is miserable to deal with but the fit and finish of their servers is genuinely exceptional. I bought a DL325 Gen 10 for work and was amazed by how well and easily everything went together. The lid latch, drive mounting mechanism, rails, and component modules are head and shoulders above any other vendor I've ever used.
A bummer they suck to deal with, you're not the first person to mention that. This thing is almost 20 years old so I don't know about the new ones but it's super well built.
I always buy outgoing generations so I've never had to deal with HP, but man I agree I absolutely love my HP Servers. ILO is generally reliable, and parts are SO easy to get for them
most of HP's design/manufacture tech from the early years came from the Compaq acquisition. if you've worked with compaq servers, you can spot various things in old hp servers immediately. early hp's own server tryouts were pieces of crap. fit and finish is one of the few things they kept at very good levels, but their customer support and feature lock (in some of the stupidest ways imaginable) is pain. in terms of reliability, i have mixed cases which boil down to: if you don't have early problems, you won't have any at all. if you do, you always will. but i guess this is what you have to deal with when you're into the "affordable" part of the enterprise market. meh.
Yeah, from personal experience, HPE Support is absolutely dreadful. It's why we switched to DELL.
@@giornikitop5373 the feature locks in ILO are very dumb, but I just spend $5 on eBay and get the ILO advanced keys for my servers lol
I work in the ITAD industry and often come across legacy units. It's always fascinating seeing the progression and design philosophy as technology advances
LOL, I was sitting here working on my ProLiant DL385 G5 when I started watching this video, only to find you using the old RAM from it in this one! :-)
Haha it all worked, kinda saved this video. Thanks again!
Man, you don't stop! I like your knowledge and presentation pace here & in your other videos.
Thank you!
"YESSS" is what I just muttered under my breath when I saw there's a new clabretro tonight. Love the content my man!
Haha that means a lot, wish I was able to get it out a little earlier!
Man, I remember those days. Got my start in Linux using Ubuntu 8.04. I haven't seen a single node LAMP stack in a long time =D
Love the videos!
8.04 was great. Thanks for watching!
Ever since I started using HP servers in the mid 2000s, I have loved their servers. They are always easy to get into, easy to remove parts, everything is so easy to work on
Great video. I thought it was really cool to see how much redundancy was built into these systems back in the day. Nowadays all our servers are just cattle - memory can fail, the server goes offline, and nothing happens.
Exactly what I was thinking. These days if you're still an on premise shop you're not just running one server, you might have three (or more) servers behind a load balancer. So if one goes down, everything just keeps trucking with the other two. If the load balancer goes down, the entire thing fails over to the backup cluster.
I work for a company that owns two small DCs, we are just NOW putting these into production running OmniOS and bhyve as hypervisors attached to 3 large disk shelves.
Wow.
I gotta know how that worked out. Still running?
WTF lol
I ran a forum, hosted on a dialup connection with dynamic DNS to keep it "available" through reconnects back in the early 2000s. But I was an ASP weenie back then rather than PHP. Later I moved to Perl when I started messing about with FreeBSD more, and now Python.
Fun stories from that time of my life. I used to attend a LAN gaming event at a local community centre, and got involved with running it. Later on I wrote an online booking management system to help manage the attendees and make sure we didn't go over capacity. That was in ASP of course. 😎
ha thats awesome
Nice work. I worked for a Compaq dealership in the early 2000's. I never saw one of those machines in the flesh, but it's very similar to the things I did see. I hated the SmartStart disk that was needed to boot Compaq/HP servers. Be thankful yours was already configured. Great content as always.
Thanks! Yeah I was relieved I didn't have to do any smart start stuff, had to do that on the gen1.
Holy heck PHP brings back memories of school. I did a GCSE in compsci and wrote our coursework project in PHP.
PHP was a joy (and nightmare) to use and I do and don’t miss it. In many ways the hot reload stuff I have with the JS stuff I do now is similar but that simple F5 and it just works? Can’t beat it. No config needed!
On the drive sled rust, vibration will wear through the zinc plating exposing the steel. Exposed steel rusts quick in air.
I always loved HP Proliant Servers. Very easy to service. Also very good at indicating what is broken or not working properly.
Pricing it out was awesome. Made my night
The 3rd, 4th and 5th generation 4U and 2U ProLiants were so damn nice to work in.
I watch your videos because of your attention to detail and I can appreciate good engineering.
Used to build these back in the day at HP in Erskine. The reason the DL580 G3 gained so many of these features is that it was a consolidation product; it moved up a tier from the G2 in that it entered the quad-socket arena, as it was also a replacement for the old DL740 & DL760 G2 high-end servers at a much lower price. For such an expensive Intel-based mid/high-end server we built an awful lot of them, as people were still unsure of the higher-performing AMD-based DL585 until the G2 version of that was released.
Fascinating. Yeah this thing is absolutely maxed out with features.
Okay, I must admit that you are making me reminisce about the old days. Totally nerding out here! ✌️
glad to hear it!
Cool machine! I'm curious if the errant memory board would work if you use another material to insulate the bowed board from the caddy. Anti-static bags are actually slightly conductive themselves - this is how they create the anti-static protection for the bag's contents. I'd try again with paper or vinyl and see if you get any different results.
Well interesting, I'll try that!
Here's a good explanation of pink anti-static vs. silver static shielded bags
ruclips.net/video/imdtXcnywb8/видео.html
The LAMP phpBB stuff is taking me back to setting up a forum for my World of Warcraft guild in college. Incredible.
Nice!
Oh you're a Perl buddy 😊 I used Perl extensively between 2001 and 2007. Very fast and versatile
I used it a lot briefly around 2009 or 2010, have a bit of nostalgia for it
This video was so entertaining and informational. IMO, you’re already making videos at the level of someone with many millions of subs, so… imo, do not change (: So cool to see what the world ran on in the mid 00s while i was in grade school.
thank you!
Back in the day I was a server tech in the banking system. Deployed hundreds of these boxes. That and DL380s. Being the bank, they always had these fully populated. Can't imagine the total value of what passed through my hands. Those 580s were rock solid. And yes you are pronouncing ILO properly ✌️
Very interesting to see how x86 server manufacturers were trying to differentiate themselves as the platform became such a commodity product. Even at the time I wonder if amortizing the cost of developing a fancy redundant memory system over sales versus "just buy a second server" ever worked out... Great video as always!
Definitely now, but hard to say back then. Amazing how expensive it all was back then. Thanks for watching!
@@clabretro Back then, you were a Dell shop, an HP shop (or a Compaq shop), an IBM shop or a no-name shop, mainly dependent on who got there first, based on the remote management platform you would have standardised on (or whatever the contractor would supply if you were in government at the time). This server would have been either an application server, a database server or an ESX server (though that came a little later). For these applications, having the redundancy was a life-saver (and well worth the money).
I would agree with you on the price statement, but you should try and look up some enterprise software list prices from those days ;-)...
I love the design of HP products as I work on them, however HP can also be terrible to deal with. I personally deal with them so much I decided to buy a DL360p Gen 8! Wonderful video, very well done, keep it up!
Thanks!
Antistatic bags are conductive! That’s how they suppress the static.
Ha - I'll try something else then, good point!
@@clabretro Unfortunately based on the state of the board, you are likely screwed anyway - but thought I’d mention it regardless 😅
Haha yeah it's pretty warped
that great lid on that Gen 3 is also on my DL360e G8, it's fantastic as it makes maintaining that thing and its temperamental memory very easy
I worked on a few of those ProLiants. Nice machines. Good assembly too.
I had the towers. Very heavy beasties!
This is what peak performance looks like boys
30:35 Programming only from documentation, or from a book, is really cool. It reveals the designer's point of view. I can read Perl because of Nagios plugins, and it can be done without pasting from Stack Overflow over and over.
I'm still using a crusty old phpBB and made a painful transition from PHP 5.6 and a ~2018 phpBB version to a newer one. It is 18 years old now. The amount of specialized knowledge over there has no cloud substitute.
Ran DL380, DL580 and DL980 servers over the years, on top of the c7000 and newer Synergy systems. The DL580s were good boxes, but the memory speed was slower to match the slower Xeon speeds. Moved one Oracle RAC cluster from DL580 to DL380 nodes when the DB needed more oomph in the cache and memory department over core count.
I used to run the local VW User Group on phpBB back in 2002-2006 ish!
nice!
RE the HP/Compaq split identity. Post merger till at least 2006, any Proliant 300, 500 or 700 series was the team out of Houston from the Compaq side. They weren't in a rush to cover up all the Compaq logos with HP ones, and I specifically remember the SmartArray firmware for a time displaying Compaq in red text briefly before flipping to HP in blue during boot. The 100 series were at times some of the pre merge HP server folks IIRC, but overall the Compaq side took over the x86 server designs in the post merger company. HP side ran Itanium, even though the Proliant team had built one Itanic box, the DL590 prior to the merger.
Had fun supporting that era of servers and storage tech, DEC and Compaq's legacy in those spaces resulted in some solid designs and support organizations.
Very cool to hear about that. Definitely a fascinating overlap with HP getting rid of their offering and bringing Compaq's in.
I'm telling you something: There was at least one piece of storage equipment that did exactly what you described. Now I'm being deliberately vague here because I don't remember what that piece of technology was - it could have been a SmartArray controller, an earlier one maybe, or perhaps it was the HSG80 storage controller. However, that particular hardware had originally been designed by _DEC,_ so at some point "post-merger" (HP + Compaq) I upgraded its firmware and noticed it doing this ANSI-style overwriting of the old _Compaq_ identification with the new _HP_ one after a few moments, just like you mentioned. But what it would also do and had already done before the firmware upgrade was displaying a _DEC_ tag before the _Compaq_ one, making it actually go from _DEC_ via _Compaq_ to _HP._
So booting up that system was always kind of a quick tour through several years of computer history, all via one single POST initialization message.
I really enjoyed the way you spoke about the convenience of cloud computing nowadays from an admin perspective. Another certified clabretro classic.
I cut my teeth on the 1RU versions of the G4 or G5 of these, I can't exactly remember. It was prob ESXi 4 with a bunch of windows VMs... good times.
Last time I saw one in the wild was a DL580 G5 2RU which was my old job's SQL server, decommed and now just another Hyper-V VM! I would've taken it home but it was a loud and heavy beast... kind of regretting it now...
Thanks! These things are ridiculously heavy (I know I say that about all of them, but I really mean it about this one).
Aaaahhhh you're a developer. Thought you were a server guy, but every now and again you said something that made me think there's no way you can't be. You're not a bad server guy for a code monkey. Great vid too, look forward to more as I just subbed. You are pronouncing iLO correctly too.
At my last job they still had some G4s running, these things just don't die.
Nice to see some folks showing some old servers some love. Servers are the most unloved of all computing equipment.
I would collect them myself, except I just lack appropriate storage to keep them without damage.
cant believe im this early, I love your content and its always great to get inspiration for my lab :) keep it up! thank you for keeping me entertained haha
Thank you!
Love the design of the thing, super neat system :)
Great to see the old kit working in 2023... that in itself says a lot about the quality of these machines.
HP still makes excellent hardware and when you have a decent number of machines the support is definitely better. The Gen10 machines are well built, have similar config diagrams (very handy for RAM population) and rock solid performance. IMHO the ability to hot swap is not as good, however in the last 5 years we've only replaced a handful of chassis fans with the last few able to be swapped whilst the machines are on (even though they did complain).
Great memories from this video. A LAMP stack, ha! What a beast of a machine you have there. I think the bulging of the board is indeed the cause of the defect. Maybe you can try and trace some connections. Thanks for the great content, byeee!
Bowed PCBs are usually OK internally; the issue is that surface-mount caps and resistors sometimes pop off. Or really anything soldered to the board that doesn't have the same ability to flex can crack its solder joints.
ah, watching the old iLO takes me back. the good old days.
I don't miss having to have Java or ActiveX. Native HTML is so nice.
Amazing video! Small question, why not keep the kvm in the server room and access the connected machines via the network to the kvm? Is local connection (vga and ps2 direct to the kvm) that much better? Thanks for another great vid and hope you have a great day!
great question, this KVM has pretty outdated remote management software I have to access via an XP VM and the video quality is low, so the local hookup is better, especially for screen capture. but I might see how far I can update it though, maybe there are newer software versions available.
@@clabretro ah yep, makes sense! That remote extension thingy will help with having to run only one line back there though! were you able to source more adapters for hooking up more servers?
yeah I've got three with ps2 and one with USB... now that I'm typing this out I'm realizing I should've tried the USB one haha
I actually set up several PHPbb servers in the mid 2000's, but I couldn't even convince my friends to post on it... The most use I got out of one of them is when I convinced some classmates in my linux OS class to use it for the group project we were working on...
sounds about right haha
It's interesting to see the shared design language between that machine and a DL585 G2 I've got. It's basically the DL580's slightly bigger AMD Opteron brother with lots more RAM slots (but no hot swap), a more symmetrical layout, SAS and PCIe. But other than that, very similar!
Gotta love that classic Microsoft natural keyboard!
I was definitely running a website on our old Pentium 4 machine back in 2005, but I was running Lighttpd rather than Apache. It was running on Ubuntu. I spent most of my time developing my own website in raw HTML, PHP, CSS and JavaScript using Notepad++, with some convoluted SSHFS setup to be able to access files locally from my laptop. Good times.
nice retro LAMP setup! would be cool to see some Perl scripts too
I know. I need to get some Perl CGI scripts going.
Loved the coverage of this Proliant, thanks for sharing!
I’m currently transitioning to more infrastructure work in my role and primarily deal with HP ProLiant & Procurve hardware. I’d be really interested to see you cover their switching gear in the future if you have any plans for that!
would definitely like to cover early procurve gear eventually!
@@clabretro Great to hear, thanks for replying. I know we're currently still supporting switches as old as the 2610 and 3500yl and updating VLAN configurations from a previous network manager, which has been confusing to unpick due to HP's mixed use of the terms "tagging" and "untagging". Recently decommissioned a 2524 which had done many years of service. Looking forward to it!
4 Xeon 7040 CPUs with 2 cores each at 3.00GHz and 165 watts of TDP per chip. Gotta love that NetBurst powa!! This kinda reminds me of an older Dell Precision 670 workstation I got for dirt cheap in the early 2010's. I won the lottery with the motherboard revision and was able to upgrade it to two dual-core Paxville Xeon chips with hyperthreading. The thing was insanely hot but fun to mess around with, especially when the budget Phenom X3 I had at the time ran circles around it and only used 65 watts of power.
165 TDP per CPU?! damn that thing is a literal heater in a box
I was like, this is awesome! And also, my server rack is full already, I don't need any more hardware.
Mezzanine cards originally (and some still do) sat atop a motherboard/mainboard, often on standoffs so kind of like a little mezzanine :)
And I agree with what some others have expressed: HP's excellent mechanical design is usually let down by the actual thing running inside the chassis. I always assumed the inner instruction stickers were done so well because they knew their field techs would be in there so often.
A mezzanine is a daughter board parallel to and very close to the system board. Like a mezzanine in a building is between two floors, a mezzanine in a computer is not quite the system board, but also not really a classic expansion card.
@@Dummvogel Yep. It's still really common to find RAID mezzanine boards. It lets them sell the system without a RAID controller, but it gives them better integration into the motherboard than a standard PCIe expansion card would. For example, you can have the onboard SAS ports be controlled by the mezzanine board if it's plugged in.
37:10 one of the most underrated programming comments ever.
We had these back in the day for a packet sniffer tool for monitoring what people were doing in a system containing sensitive data. Needed the speed and resilience these offered. We had such big discounts with HP that our local supplier charged us less than they paid to the supplier network and then got a refund from HP for the difference 😂 Before the 2007 global banking crisis the money we spent on hardware was crazy🎉
I would love to see a lab tour and see what you're doing with all the Dreamcasts ^..^
I was thinking of a tour type video eventually, might get one sometime!
I"m sure you found out, but it won't stay as loud once you get it past POST. Hopefully you got iLO working and then no need for a KVM. Ah, I see you were able to get the iLO setup. Not sure what features the iLO of those days supported, but not not only does it fully mirror the same things the KVM would show you, but you can mount an ISO to patch (like an SSP) and/or install from.
I found a full iLO license key on the case later haha.
Oh man bringing back some good memories! 2005 was around the time I was running my first website. I was more of a fan of Invision Power Board than phpBB back then! By the way, you can search around and find a license key for that old iLO to unlock the "advanced" features. HP (and Dell) still do that nonsense today (but they'll let you at least use it to install an OS and use it to watch the server POST. After POST, the free iLO/iDRAC will kick the console.). HP and Dell are why I love Supermicro servers. Their IPMI is usable without a license (you just can't do a remote BIOS upgrade and maybe a couple other things), they'll accept any compatible hardware without major complaints or throwing a temper tantrum like HPE servers like to do for even something as simple as storage without HP's blessing. You're making me want to dig out the old servers in my attic now!
haha that's awesome. some other folks mentioned finding the iLO keys, it might even have one on a sticker on the machine, I just noticed.
I hear you on supermicro... way more flexibility. what gear do you have hiding in the attic?
Wow, that looks familiar (sort of). I had the Compaq variant of the DL580 Gen 2 (a 5U server) back in the late 2000's. We used it as a fileserver and database for the local school at the time. They were pretty bulletproof, except the SCSI drive sled contacts were kind of troublesome.😉
Honey stop the car new mid-2000s server video just dropped
The iLO errors may indicate that the license is not set because the NVRAM ran out of power. There should be a license key as part of the iLO label, along with the default admin username and password. When I got my second-hand ProLiants, I had to reset the iLO license due to that NVRAM issue.
oh interesting, I think I do have an iLO label. I'll try that out
@@clabretro - you can often find ancient iLO Advanced license codes online...
Woah! Hold on there...don't forget about VMware. It was and still is a very important part of the landscape, especially pre cloud and docker stuff. We ran tons of the old HP esx clusters before Cisco UCS came along.
Great boys! I'm interested in knowing the wattage/power draw, thanks
Thanks! With just the power supply plugged in running the iLO (machine not turned on) it pulls 36W, but when it's powered up it draws anywhere from ~380W-440W. I should have put that in the video!
When I started at my current school job, the school had an ML380 Gen 5 server as the main server running NetWare 6.5. It was my first time working with a proper server and it was beautiful. However NetWare was seriously unstable on that system in 2009. Still have the unit sitting in our storeroom, with no idea what to do with it. Draws too much power to be a useful system and the 36GB SAS drives are really too small for much of a modern OS boot volume. Still, from a physical layout and labelling standpoint, it was beautifully done
The kvm keyboard and mouse connection can be refreshed through the online ui with the Command and PS/2 setup selections. I have to periodically do it for some servers connected to my Avocent kvms.
oh great tip, I'll give that a try
About 10 years ago I purchased a used HP DL585 G1. This uses the AMD Socket 940 processors. What I found out later on is that even though they are 64-bit processors, they didn't have the AMD-V extension. Thus, running 64-bit virtual machines on it is a no-go. Another thing is the server did not officially support Windows 2008 R2. I can't remember why. I know the server installation discs didn't support it. But I was able to install 2008 R2 with no issues. Other people around 5 years before that had no issues either. One last note is I finally picked up a rack mount kit for it. The prices on e-pay came down enough.
It prob doesnt even have enough RAM and cpus to run VMs anyway hahah. But what about containers?
I had a DL380, I think I found either a way to get a trial iLO key or a keygen to let me manage it remotely.
You should revisit your fancy kvm, can you boot a server off the usb now that you have a usb module?
No luck with that USB module, I think my USB SIP must not actually support it.
I feel like that one was one of the last of what was the standard back in the very late 90s to about 2005 or 6 or so.
Machines like that were the pinnacle of "large format" rack mounted servers. Later units shrunk and just didn't have all of the same options.
Of course, ProLiant was originally Compaq's name/line. But after the buyout, HP took the name.
Until it finally died, we used to run Novell on large boxes like that. They would be file and print servers for 1000 - 5000 users.
I'm curious how this will run Ubuntu server 22.04LTS. I have a Dell server of a similar vintage running it. Does amazingly well, crazy to think how far back the 64bit CPU extension goes.
Oh the memories!
DL580.... Only complaint I have is, we always ran like for like make/models when doing development. Always had exactly the same stuff env to env. REALLY neat video.
The DL760s had the same memory swap feature. I never saw it before in the DL580s. Anyway, back in the day the hot swap of memory modules didn't always work as expected, complete with machine crashes during the procedure. So we ended up just taking the host down and powering it off for any RAM replacement.
The school I work at rn still uses this equipment nowadays since power bills aren't a problem. You are such an expert on these things! In my case I had to learn it from zero using manuals hahah.
I like to install modern Proxmox on these things, as it's Debian-based and can run almost everything on everything. You just have to install it on top of a fresh Debian install so it uses qcow2 files and thus doesn't eat your RAM with that ZFS crap.
Btw, I recommend just using the firefox archive to download a compatible old browser. And java 8 binaries are all around so it should be no problem at all.
Also, you could flash the HBA on this thing, it probably supports IT mode, and modern SATA disks up to 2TB.
Oh man! phpBB takes me back! I ran a semi successful gaming community with a phpBB forum starting in December 2005 and regrettably ended up moving to vBulletin. All that's left now is good memories and a Discord "server". Also that KVM multiplexer is awesome! It brings back memories of the daisy chained KVM setup we had in our racks for work. One day the multiport IP KVM died and as half of everything by that point had iDrac, a one port IP KVM daisy chained with other multiport KVMs did the trick but the IP KVM software had a product key... on the back of an installation CD... left in the data centre, so I called in a favour and let's just say the jnlp could be easily massaged to bypass the ridiculous product key prompt. I'm not sure what sort of psychopath would try and DRM software that only works with proprietary hardware and you need it to work when you've had a catastrophic failure.
Anyway sorry for the text wall. TLDR, great memories, love the adventure, looking to see what you come up with next!
haha great story about the KVM. thanks for watching!
Oh, I can confirm that you absolutely get plenty of dust in a "proper" data center. It's not like a semiconductor clean room. Colocation providers use roughly the same quality air filters on CRAC units that you use at home, probably in the MERV 8-13 range. The air does get run through the filter a lot more than you would see in a home or office, so there is perhaps less dust buildup, but it's still there for sure.
However, you usually don't have enough moisture in the air to cause any rust concerns... more modern data centers do humidify the air a bit (counteracting the dehumidification provided by the air conditioning) in order to minimize ESD, but not to any extreme degree.
Most drive caddies on more recent servers even still use the same kind of leaf-spring design to center the caddy in the bay and to deal with vibration. But it is unusual to see a server in use for 25+ years, which is what would be required to build up that much dust under those springs.
Curious how the 4 CPUs work with only two RAM modules and not even being fully populated
yeah I was wondering about that, super interesting
Installing phpBB brings back a lot of memories. Thank god I moved on from web dev :) As a side note, I could've owned an old G1 DL-something (much older than this, I think it had 2x Xeons in Slot 1), but the seller didn't respect the auction price and wanted more for it. Sad :(
Bummer. That DL380 I have has two Pentium IIIs. Yeah, web dev is brutal 😂
That rust could be due to anodic-cathodic rot. I.e. Aluminium pressing against steel, possibly.
oh interesting point
Yup, that was what i was thinking as well... electrical corrosion.
Stunning machine. Shame about the memory board. At least you got a working server. LOL, that server lid is almost as satisfying as racking a pistol slide 😆
Have you come across any late model HP / Compaq Alpha servers? I have been looking for when they exceeded 1ghz.
I haven't but those are really cool
if it makes you feel better, i set up a PHPBB forum on my raspberry pi 1b like last year
just to see if it could
and it did indeed could, but i really shouldn't have because it was very easy to overwhelm
The ProLiant line is still around today under Hewlett Packard Enterprise.
yup, gen 11 I think
wow, 64bit PCI-X slots and SCSI, haven't seen those for a while. good piece of hardware but i guess power consumption will be a concern if you want to keep it running.
yeah this one doesn't stay running too long lol
I think I have some memory modules that would fit one of these from this machine's bigger brother (a 6U unit?). Help me get a hold of you, and we can figure out how to ship them to you if you want them.
Hey there, you can reach out to the email in the channel's about page.
Great stuff!! Keep going so I can convince my wife to buy one of these for my lab hahaha
😆
I got to thinking about it, and what would be super cool would be a comparison of compile times for the Linux kernel (might have to be an older one, not sure) VS a modern 4 core CPU. This thing is nuts, but I wanna see HOW nuts haha
I bet this one is slooooow with its original Xeons.
Oh man, LAMPP! I actually used WAMPP as recently as 2017 for a programming class. Heck, my job as recently as 2020 ran XAMPP for hosting. If that's all you need, it's still pretty simple.
Thank goodness for Node JS and simple servers though.
(For those wondering if you never used those tech stacks, those aren't typos; there really are a bunch of versions for different machines)
Oh yeah I thought about going on about all the different iterations, but figured good ol' PHP LAMP would be enough for one video haha.
@@clabretro oh you made the right choice! Way too easy to go down rabbit trails with any kind of software.
24:54 I have that same screwdriver, it is very handy!
Awesome. Hello from Rus!
iLO has always been my favorite out-of-band management. iDRAC is OK, CIMC can take a flying leap.
Also if you google you can find trial license keys to see the full ILO experience.
I had a feeling, was going to try that lol
That memory test (20:28) interface looked just like TempleOS