@@clabretro I wonder if the concern is that dried-on thermal paste could damage the dies or the tiny capacitors on top of the MCM when removing the heatsink.
Nice old IBM logo with a very sweet case. Externally, I love this style. Do you think it's possible to convert this system into a modern NAS? Keeping the HDD bays for real use, same for the DVD drive, maybe getting the front display working with luck, but mostly rearranging the internals to fit a modern motherboard/PSU etc.?
That IBM logo in the Craigslist ad I found this in was one of the reasons I went to check it out. You could definitely figure out a way to retrofit modern components in there, it's huge. I was thinking it'd be a cool project to do that with one of the deskside tower versions.
@@clabretro I'm currently running my NAS on a Node 304. I don't have rack stuff; my pfSense box is a Lenovo M720q and I just have a small Ubiquiti switch. But! When I saw the first video and this monster with all the front bays, I immediately thought about a very cool NAS project. But not easy to find on eBay. And mostly expensive.
I want to know how much time you spend reading documentation and getting a machine from shipping to running great. I was planning on doing something similar, but I have no experience with racks or server hardware... It would be great if you made a video that walks us through a basic setup: what we even need to consider before taking such a step, what kind of connection (cable) and what programs you use to communicate via, for example, the serial port, etc. ;-) What should we look for when considering buying such equipment? (I've heard of licensing that can prevent you from doing certain things with the server. Can you somehow buy that license, or do something else to make things work?)
Good question... with this older stuff, I spend a lot of time reading documentation and getting it to work (I try to get most of that part on camera). I wouldn't really recommend these older servers as "full time" homelab units, they're far out of date and very power hungry, but they're certainly fun to play around with. A lot of the answers depend on the server in question, but I'll think about it. Maybe I could make an "overview" video of some sort that would be generic enough for folks to learn about servers in general.
I just happen to have around half a dozen of those funky IBM disk tray/interposer boards that I have no use for. I think they're the exact part you need; I can send photos and part numbers if you're interested.
Wow. That drive tray cage. A single cast metal piece. Wow. These days they're usually two or three separate cast pieces with some rivets, pins, or screws. This one is built like a brick. Still, I wish there were a common standard for drive cages.
I would think it's possible as long as there are POWER-based distros, but maybe things like the disk controllers or other proprietary bits would cause headaches.
Not a dumb question, I forgot to mention it in the video. I did power it via the molex cable and the drive was spun up, just couldn't get the p5 to detect it. I have two drives on the way which hopefully have the right connectors 🤞
Based on the ASMI, this IBM server was actually being used by IBM before you purchased it. The IP in the ASMI was 9.5.32.198/24 (a subnet owned by IBM), and for some odd reason, I guess they had never heard of NAT, seeing as the IP address is publicly routable and statically set.
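If you're curious, that's easy to check from any box with whois installed; 9.0.0.0/8 is one of the original class A blocks and, as far as I know, IBM still holds it:

    whois 9.5.32.198

The OrgName in the output should come back as IBM.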
Ahh that was actually a screenshot from some IBM documentation, I filmed that before I actually got the ASMI working on my own p5. Very good catch though, that's interesting! Clearly they did their own doc creation on their own subnets.
Nice video. Also very cool hardware. Perhaps I missed something in your video, but you do not seem to have any power for your external hard drive. SCA does provide power on the connector, but your SCSI controller cannot provide that, so you need to provide external power. There is a molex power connector on your converter board.
I forgot to show it specifically in the video, but I did power the adapter board via molex. Still no luck. I have a couple drives on the way which should hopefully work 🤞
You're right, I forgot to point out in the video that I was powering it with the molex connector, so the drives did spin up but still no luck. I have a couple drives in from eBay with the right boards so hopefully those will work fine 🤞
My son and I have two System x servers. I can't remember the model numbers. Decked out with 128 GB of RAM and dual Xeon procs. You know they're old because they only have a total of 12 cores, well, a mixture of cores and Hyper-Threads. I use one for virtual machines and my son uses his for Minecraft servers and stuff.
I ran these servers in the financial services industry for over a decade. They are built like tanks and are rock solid. The last model I was running was a P9 before I moved to an organization that no longer used them.
Very cool. I obviously haven't done anything serious with them, but everything I read online also points to these being totally solid machines.
@@clabretro at 2:25, down at T9 looks like a USB 2.0 Type B port, idk
That would explain why the bearings in the fans were so bad. It's hard to find fans that worn out in x86 servers; their lifecycle is much shorter.
@@clabretro if you really want to have some fun (and have a serious power source), find an IBM BladeCenter chassis with some JS22 blades in it and a SAN. Those were monster P series setups.
@@TheJonathanc82 If you're looking, there is also "Flex System", which was the blade chassis for the POWER series (you could also put x86 blades in it).
AHHH the memories. I worked on these boxes for 28 years, retired last year. You have earned a follow.
Ah awesome! And thank you!
IBM POWER, wow, I love it! Thanks. This non-x86/x64 CPU architecture is quite unique. Still alive with new products after so many years, whereas MIPS and SPARC have disappeared from the server market. 30 years on, IBM POWER is still alive today with POWER10. That's very impressive.
I worked on these systems for years, until around the time this machine came out. If you can get a working HMC, you can do some cool things. You can split up your machine into LPARs (logical partitions), which are sort of like virtual machines but run on the hardware via a hypervisor instead of software translation. You can then split up your resources, like tenths of a CPU to each LPAR. Not much to split up, though, with only two CPU cores. This video brings back a lot of memories. Some of the earlier high-end rack-mounted RS/6000 systems took about 45 minutes to boot to the SMS prompt, then you had to be there to hit the key to get into SMS so you could install the AIX OS. That was painful.
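From memory, carving one out from the HMC command line went roughly like this; mksyscfg is the real command, but treat the attribute list as a sketch since the names varied a bit between HMC releases, and the system/LPAR names here are made up:

    mksyscfg -r lpar -m my-p520 -i "name=aix_lpar1,profile_name=default,lpar_env=aixlinux,min_mem=512,desired_mem=1024,max_mem=2048,proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.2,max_proc_units=2.0,min_procs=1,desired_procs=1,max_procs=2,sharing_mode=uncap,uncap_weight=128"

The desired_proc_units=0.2 is the "tenths of a CPU" bit: that LPAR gets two-tenths of a core from the shared pool.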
When the HMC shows up I'm definitely going to try LPARs, seems very cool to experiment with.
That's wild about the RS/6000 boot time, amazing.
@@clabretro Some other "tricks", although I don't know what is supported on this lower-end, older system.
1. On some of the System p machines, you could upgrade the system firmware (similar to the UEFI/BIOS on PCs) without rebooting. I know you said you don't want to upgrade the system firmware, so that might not be something you would want to try.
2. Some of my colleagues were doing something where they could take a running process and transfer it to a different machine where it would take over starting with the next instruction. Both machines had to be connected to the same storage device. This also had an IBM acronym, but I forget what it was called. If this is at all supported on this machine, you could potentially attempt the same thing with LPARs.
I used to refer to these features as "witchcraft".
@@clabretro Interesting you went with a rack-mount HMC. I have/had a mid-tower model ('had' in the sense that it eventually wouldn't recognize all its memory slots, and I was stuck at 3 GB of memory, so I turned it into a Ryzen 5 with some crap MSI board)
That is all definitely what I would call witchcraft 😄
If you find an old HMC, you can get a root shell via man or less by going into vi mode and then executing a shell from there. They are locked down by default. It was fixed in newer versions, I forget which, but you might be lucky. The management UI is actually very, very good (the old one at least).
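If memory serves, the escape was the classic pager trick, roughly (no promises, since it depends on the HMC version):

    man ls      <- or less on any file the restricted shell lets you open
    v           <- from the pager, drop into the editor (usually vi)
    :!sh        <- from vi, spawn a shell

The same trick worked on the restricted shells of a lot of appliances from that era.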
You aren't helping dissuade me from picking up some old enterprise equipment for the home lab.
But I don't have a home lab at all! Living vicariously through you haha
Haha well it doesn't have to be a huge beast like this. Any computer will do!
That system just looks so slick. I know enterprise gear usually doesn't get a ton of love, but you've gotta admit that IBM made some darn attractive machines.
Totally agree, I meant to mention that specifically in the video. The IBM equipment always looks so cool.
Welcome to the magic world of the IBM i platform (formerly known as AS/400, eSeries, pSeries, Power, Power-i). I ran that system for 16 years! The 5.25" bay is for a tape backup unit. Fibre Channel would normally be used for a SAN (most machines are diskless). Thanks for this time travel.
Thanks for watching! I'll have more coming... I have some drives which will hopefully work in that p5 as well as a physical HMC to mess around with.
The downside to these proprietary enterprise servers is they're built to be about as user unfriendly as possible - unobtanium interposers for disks, arcane menus on the display that are in numbers instead of text, other similar proprietary stuff. That way the customers are supposed to buy the yearly service contract and let IBM take care of all that "hard stuff". Pretty sure they make more money off the service contracts than they do off of initially selling the machine.
Love the old enterprise gear! And cool to see you getting this stuff actually working.
I love this stuff and I don't really know why; I'm not old enough to have worked with it.
The videos are super well done, audio levels are lovely and even, and you shoot really well. Keep it up, I love these videos.
Thanks!
For such a small channel you make great videos!
Thanks!
Gotta love enterprise stuff. It's over-engineered but super simple to understand.
Your channel is becoming one of my favourites. Nice niche, enterprise hardware from the late 90s and early noughties.
That web interface hasn’t aged a day, what a beauty to behold!!!🥰
Yeah, right! I do not miss frames at all, they make any web-based interface look so dated.
Yess! A perfect Saturday evening with some clabretro and old IBM servers :)
Just found your channel a few days ago and can now proudly say I’ve binge watched all your videos. Amazing content, been having a blast watching it. People are probably tired of hearing me talk about your channel hahaha
haha thank you!
The SCA to SCSI-50/SCSI-68 pin card that you use requires external power with that molex connector on the card. It supplies power to the hard drive so that it can be seen. I have one of them in my 43P-150 to connect a SCA hard disk to the 68pin UltraWide SCSI channel in that machine.
Those POWER5 machines were until recently the backbone of the FAA EnRoute system (ERAM). They have been replaced in the past few years with HPE Proliants running RHEL. Those P5s were used as deskside machines for air traffic controllers and the backend servers.
I forgot to mention in the video, I did try it with the molex connector. Drive spins up but I can't get the p5 to detect it yet, probably just the wrong jumper settings or something.
That's amazing about ERAM -- doesn't surprise me these things ran for so long though. Super high build quality.
@@clabretro depending on where you are looking, from memory they won't show up in the menu you were looking at for boot device order unless they have a boot block from an existing installation. Check if the disk shows up in the CD install menu as a disk you can install AIX on.
@@clabretro those 10-15k RPM drives also run screaming hot; they're expecting to be in the enclosure with air being pulled past them. If you run them externally you may want to have a fan blowing on them. I killed two 36GB Raptors once when I left the front fan unplugged while doing some work.
@@colinstu very good call out, I also ended up pointing a fan at them while I was testing. The Sun 10k drive got super hot.
I never comment on videos, but this stuff is gold. I had no idea I was into retro enterprise obscura! Keep up the good work!
Thank you, and thanks for watching! More to come.
Been so looking forward to this video after your last. These old enterprise servers are a gold mine to a young IT data center tech! Keep them coming.
Thanks!
logg, logg, it's better than bad; it's good! everyone wants a logg, you're gonna love it, logg
I was thinking "the p5 isn't that old, I worked with the 520"... then I realised I'm getting old. Thanks!
Haha it pains me to call these "vintage"
I started working on iSeries (aka AS/400) in 1998 and still do to this date (though in application support since 2017). Moving to IBM P10 next year. AS400 lives on!!! :)
The status codes are IPL (initial program load) codes. For HMC access, the 'IBM i Access Client Solutions' program provides 5250 emulation (interactive jobs/sessions) plus HMC tools and interface access.
Your content is great my man, really looking forward to this IBM hardware.
Thanks! More to come after some drives show up.
Ah Yes Quality computing from the early 2000's, Great Video!!
Thanks!
Always like learning about this type of server hardware; I've never been exposed to one except for seeing their ads in my dad's Business Week or The Economist magazines. Seeing how you get it up and running is fascinating.
I worked on these at Ford Motor - only the big refrigerator POWER5 mainframes connected to IBM Shark storage. I was responsible for carving up & deploying LPARs running Linux and AIX. And one of my best friends worked at ebm-papst (maker of those fan motors).
Very cool! Will have another video on the p5 with some LPAR action eventually.
I really want one of these with AS/400 (er... IBM i) licensed and running.
Loving the little community that's being built up around the channel! Awesome to see the subscriber number jump every time a video comes out haha
Haha thank you!
I still use these in the FAA. They're currently undergoing tech upgrades. I hate these things so much 😂. The boot process is SO SLOOOOW.
Haha that's amazing.
The reason for the long boot process is that the POWER5 processors cannot boot themselves. Rather, another chip has to initialize everything due to the GX bus calibration stages at boot. This is because the GX bus runs at a division of the processor speed instead of using a multiplier based off an external system clock, which required synchronizing clocks between all the processor dies. Note that the POWER5 die contains two cores; IBM would often put two dies into a single package for a quad core per socket. The speeds inside the package were different from those jumping out to an external socket, which added another layer of synchronization complexity since they ran at different rates. While you didn't get a quad core model, the POWER5 does support SMT, so you can run four threads simultaneously on this box.
While the p-series ran AIX, IBM's Unix offering, it also supported OS/400 aka i5/OS aka IBM i. Previously IBM had two related but specialized versions of POWER and PowerPC that merged together with the POWER4. This is also where IBM started to add specific features for their PowerVM hypervisor in hardware, though with only two cores, four threads, and 4 GB of RAM, virtualization is going to be of limited use.
The 1994 copyright stems from the system's CHRP roots to run multiple operating systems (which included MacOS at one point in history!).
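If you want to poke at SMT from a running AIX install, smtctl is the knob (from memory of the AIX 5.3-era syntax):

    smtctl                   <- no arguments: report the current SMT mode
    smtctl -m off -w now     <- drop to one thread per core immediately
    smtctl -m on -w boot     <- re-enable SMT at the next boot

With SMT on, a two-core POWER5 shows up as four logical processors in bindprocessor -q.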
Very interesting, thanks for all that additional info. These Power machines are fascinating.
4:50 - Those "flaps" are one-way valves for airflow. If the fan stops working (and providing pressure) these flaps will close and prevent the other fans from circulation air through it.
Correct! A few other folks pointed this out to me, totally makes sense.
Love your channel! Finally a well done, long format, bag of awesomeness on YouTube. Keep it up please 😊
Thank you! More to come!
Been waiting to see this!!!
Awesome job dude!
Thanks!
@@clabretro I have logg
Also everything turned so cursed with the external SCSI and USB drive.... 😅
Haha I know, I was pushing my luck. I'm sure there's a way to get it working, just haven't figured it out yet.
@@clabretro haha. Will maybe do some research. See if I can somehow find a source for the interposer board online...
Look at all those IBMs! What a cool wallpaper you could have...
well, this was a walk down memory lane. great video. happy i found your channel.
Thanks!
That brings back memories. Around 2012, we salvaged a bunch of these machines for AIX testing. I lost quite a few days troubleshooting error codes, only to randomly discover the RAM modules were somewhere between half dead and completely dead... Fortunately, secondhand RAM for these was cheap on eBay, and after that it was all good. I even installed Gentoo for shits and giggles on one of the spares. I also quite liked our naming convention: Flower, Point, Austin, because Flower Power, Power Point and Austin Power(s).
haha nice!
I really enjoyed plans B and C for trying to get the hard drive running; much of that same pain can be had in the industrial automation field! I've always wondered what the differences are between IBM System p and the IBM System i that I learned to program on in college. Keep us posted with this IBM project, can't wait!
Trying to hunt down an AS/400 system myself.
I did pick up some drives, a follow up + (hopefully successful) AIX install is on the list of upcoming projects!
I'm always on the hunt for an AS/400 as well, hard to find.
My father did CAD engineering on a 6000 for a long while (after hand-drafting). I was kinda bummed when he said they switched to PCs for Catia V5. Still want to get one some day... I need some IBM RISC in my life!
Oh yeah, it'd be awesome to get one of those IBM RISC machines.
@@clabretro Next on my list is a POWER machine of some sort, so I'll have to keep an eye out of something like this too!
Been waiting for this one since the Craigslist haul video 😁
haha me too, this was a fun one to learn about.
Man this is an old box. Still rock solid..❤
IBM does love their three-letter acronyms for different systems and components. Just as they also like their four-digit codes.
I had a POWER7 720; it was fun running IBM i on it along with AIX. However, I let it go to the next homelabber/tinkerer. Very robust and overbuilt to exceedingly awesome standards. Very quiet too after the full boot process.
The plastic thingies underneath the fans are there to prevent the airflow going the wrong way when you remove the fan.
The air pressure from the other fans would make air flow through that hole otherwise.
So they are not "dust filters"
You're right, a few other folks pointed that out too. Totally makes sense.
IBM default passwords were usually set to abc1234 out of the box. Unfortunately, CPU cores had to be paid for via IBM licensing. The hard drives the P5 requires are something called SAS drives.
Luckily all the cores on this guy are unlocked. A couple drives just showed up from eBay with the right boards, so I'll have a follow up video!
@@clabretro Look forward to it. Remember that the HMC uses its own logins, such as hmcuser and hmcroot, usually with a preset password of abc1234. There is a root login and password, but those were kept by IBM and no one had access to them except IBM. hmcroot will give you enough for creating LPARs, adding RAM and CPU... but it's very limited. The base OS is based on Red Hat Linux. You could run scripts from CSM systems to contact the HMC, but hmcroot would only allow for so much... 👍🏻
The HMC is most likely an IBM X3550 that runs an HMC image and a special version of IMM.
Tried converting a normal X3550 M4 to a 7042 HMC some years ago, fun times.
IBM at some point put ThinkPads as local HMCs inside larger POWER mainframes, always wanted one to play around
yup! the Z series mainframe I worked on for a while had a Thinkpad T420s (iirc) as its HMC.
I've seen pictures of those thinkpads on the Z series, cool to know what they're for now!
@@francistheodorecatte Close but no cigar. The ThinkPads used in the Z mainframes were the SE (Support Element); the Z HMC would have been some flavor of System x tower, x3550 M4, or Trenton (depending on the vintage). Again, depending on vintage, the SE would have been a ThinkPad T60, T61, T500, T510, T520 or T530. The T60 was used on the z9, for example. System p did not need the SE; they leaned on the FSP and HMC for all of their system management functions.
Great video! Link added to our daily newsletter about IBM Technologies!
Thanks!
If you struggle to find a working hard drive, I think you may be able to use iSCSI from a Linux or Windows server or a NAS just to get further?
Yeah that's a good idea, I'll end up trying that if I can't hunt down a hard drive. I just ordered some on eBay which hopefully have the right connector.
Man, I have only worked at two places with these, and they were everyone's favorite servers: the few times we had to work on them it was insanely easy, and we didn't fear powering them down.
It seems really well built. The console menu is super responsive too.
I have a Power720 Express that I use as well! The hardware is definitely archaic (and the IBMi software even more so), but these things are second to none in build quality! You'll also be happy to know that the ASMI web interface hasn't changed a bit. XD
haha funny to hear ASMI is the same
Ahhhh those are fun machines! I went back over the video, and wrote down some notes on bits I remember from working with them:
03:03 the rack indicator was an extra set of cables that came if you bought a rack with a bunch of these, so it'd show not only which machine to service (there's already an ident light beside it for that), but also which rack in a row needed to be opened! I never saw this installed on a customer's premises
04:00 that huge connector is an IBM GX+ connector, for "high-speed" straight-to-the-CPU, cross-system communication.
04:50 the slots, just like the blanks up front and through the machine, are to keep the airflow properly contained. These machines had very well-designed cooling solutions, and in this case if the fans stopped spinning (or were removed) airflow wouldn't "leak" through that opening, and keep working as designed
07:45 the other four "modules" you see in the rack are SCSI dumb disk trays (DASD arrays), like the 7311-D20 (I remember the code, but not whether they're compatible with this specific system). The three external SCSI cards this server has are for three of these arrays per server (the rack has two servers and six disk modules like these). The two SPCN serial ports at 10:20 are, amongst other things, for connecting to arrays like these!
18:35 "I wonder if I can just reset it to 'admin'" that was pretty much the standard procedure if you wanted to be nice to the other technicians who'd eventually service this machine :D
25:21 "Remote IPL" is probably one of the most useful features in pSeries servers, you could install a Network Install Manager (NIM) software in one of your AIX servers, import the boot media into it, and use it to "serve" remote install via BOOTP (including something akin to kickstart configs/answer files)
26:39 if you can't get the hard drives working but want to give this a try on the cheap, you can try getting another server running iSCSI (TrueNAS works well enough for this) to serve up remote volumes that you can use. I never tried this on p520 machines, but I've done it a few times on p550s and p570s successfully! (AIX-side steps also sketched below)
27:30 there's a few of those disks *with interposers* on ebay right now! Search for "97P3030" or "ibm 4326" for a few results (though they cost a preeeetty penny, sadly)
28:30 IBM HMCs are intel "x-server" servers with hardcoded hardware models, which conveniently can be handwritten in a VM configuration :)
Hope any of it is useful!
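P.S. a rough sketch of the NIM flow mentioned at 25:21, as far as I remember it (resource and client names are invented, and flags may be slightly off depending on the AIX level):

    # on the NIM master: one-shot setup that builds an lpp_source and SPOT from the install media
    nim_master_setup -a device=/dev/cd0
    # define the p520 as a standalone client
    nim -o define -t standalone -a platform=chrp -a netboot_kernel=mp -a if1="find_net p520client 0" -a cable_type1=tp p520client
    # queue a BOS install for it
    nim -o bos_inst -a source=rte -a lpp_source=lpp_source1 -a spot=spot1 -a accept_licenses=yes p520client

After that, point Remote IPL on the p520 at the NIM master and it pulls the install over BOOTP/TFTP.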
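P.P.S. the AIX side of the iSCSI idea at 26:39 looked roughly like this (names and addresses made up; whether the p520's firmware will actually boot from it is another matter):

    # name the software initiator
    chdev -l iscsi0 -a initiator_name=iqn.2023-01.lab:p520
    # add the target to /etc/iscsi/targets, one line: <ip> <port> <iqn>
    echo '192.168.1.50 3260 iqn.2023-01.lab.truenas:target0' >> /etc/iscsi/targets
    # rescan; the LUN should appear as a new hdisk
    cfgmgr -l iscsi0
    lsdev -Cc disk

That gets you a data disk on a running system; using it as an install target is the fiddlier part.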
Thank you so much! This is incredibly useful. I have a pair of disks on the way from eBay which should have the interposer boards on them, so fingers crossed that I'll be able to get that going.
I actually spent Sunday afternoon trying to set up an iSCSI instance on my TrueNAS server without luck, but it's my first go at that so I suspect I was doing something wrong.
I was wondering what the deal with those x-server HMCs were, that makes a lot of sense.
I'll definitely have follow up videos on this thing, it was super fun to learn about and use. The HMC should be on the way and with some disks we'll be in action.
Oh wow that is a beautiful machine
Been waiting for this video, its great!
Thanks!
I smell a t-shirt idea with that “you have logg” message. 😂
you might be right 😂
Those fan bearings looked shot. Good fix.
Good to have Deox-it on hand to get in there. I've used Evaporust on other parts, but never something enclosed like that. I'll remember to just give the oil a shot.
They're still not perfect but the fix helped a ton, definitely extended their life.
I suppose the transparent plastic blades under the fans are to make sure air doesn't come through there in case a fan fails.
It's a kind of valve.
I believe you're right, that would make the most sense.
This thing is beautiful
Those front panel buttons are even-more Parma Violet-y than the NA SNES controller buttons!
Nice video and machine. Good work!
thanks!
The HMC is just a regular x86 server or PC (x3500 series, deskside tower, or a rack drawer with an IBM ThinkPad mounted in it) running a dedicated OS image. You could probably find that image somewhere and install it on just any system you have around.
The HMC (Hardware Management Console) is used to help slice and dice CPU cores, from a tenth of a core to a full core or more, and to control multiple racked systems connected together for LPAR or WPAR use. The OS is a very cut-down version of Red Hat Linux. Even at hmcroot level it's still restrictive; to gain full root access you had to contact IBM directly for the password, and even then you had to give them a very good reason to have it. I was an AIX admin for financial institutions for the past 15 years. Now that IBM has bought and owns Red Hat, it's all about RHEL and cloud these days.
I'm amazed that the SCSI backplane is 0.1" pitch like the old PS/2 DBA-ESDI drives - IBM had different backplanes (under the standard 'Type' nomenclature) for the PC Server lines that preceded the 'Netfinity' servers (5xx-series became 5000 models, 7xx became the 7000 models). Netfinity may have had a variety of trays for the line (with the blanks lacking compartments like here), but the drives themselves were SCA. After PS/2s, IBM started (like everyone else) using Adaptec controllers, which meant differing SCSI ID conventions (previously, the drive controller was in the highest-numbered slot, and the boot drive had the highest SCSI ID).
Actually, the system reminds me more of an AS/400...
This machine is my first "enterprise IBM" experience (basically anything other than their PS/2 and x86 consumer gear). I have a couple drives coming from eBay which hopefully fit that backplane and I'll post a follow up video. Spent a ton of time this afternoon trying to get the p5 to recognize a drive via one of those SCSI controllers and a converter board with no luck (even in the AIX installer). No luck with iSCSI either.
I have to say I'm honored you're commenting, I've watched a ton of your videos!
I've never used AS/400 but I'd love to hunt down an appropriate machine from the 90s to run that on.
@@clabretro - The AS/400 line (when the change was done from beige to black) had the iSeries, which were x86 CPUs in a Netfinity server structure. A huge part of the Netfinity design was hot-swap drives, but of course, an AS/400 has never been implemented that way. Later AS/400s moved from x86 back to the IBM POWER CPUs, like the model here. As a friend said when I showed him the video, "IBM had different trays for every line."
Again, that old 0.1" PCB-like pitch throws me, since older backplanes with SCSI-2 drives were better, and SCA drives were on the scene by that time. But IBM liked having different trays.
I recently found a batch of trays for my Netfinity 7000 on eBay - so hopefully what you found will work for you...
Really interesting. It's been fun to contrast these against Sun machines of the era (which, for example, had mostly standardized around one tray).
(22:33) Try opening the drive with the emergency eject pinhole on the front. There's a chance a ribbon cable on the inside was ripped; I remember buying a used IBM ThinkPad R51 that had a similar drive (the higher-end DVD multi model), and it turned out not to work because the ribbon cable had completely ripped.
(23:38) That reminded me of when my dad worked at a business school in the early 2000s and they, at the time, ran Windows ME. I think it was because they were a small business and couldn't afford to run Windows 2000. The two identically configured custom-built PCs we had at home ran Windows 98 SE.
It ended up being that the server needed to be fully powered on for the drive to work (oops). I ran 98 SE for a long time at home before finally switching to XP.
those plastic things under the fans are for backflow prevention: if one of the fans dies or gets removed, the airflow doesn't get wrecked
Ah, that makes perfect sense! Thank you.
I fixed a lot of fans that way.
I just had to do all the fans on my only one-year-old SAS backplane; they were ball bearing too! I think the SAS hot-swap bays were stored in a really damp environment.
I also have had problems with those SCSI adapters. What I have found to work is using an HP DL380 G4 server hard drive backplane PCB. It works fine with an Adaptec 39160 PCI-X card (which can work in regular 32-bit PCI slots). It needs custom wiring into the power connector so the 12V and 5V go to the right places.
The hardware that communicates prior to the system being 'powered on', and the early boot sequence items that communicate once the system is 'powered on' very likely want different status lines asserted on the serial port. This can usually be puzzled out with a serial break out box to determine which lines need to be tied together, etc.
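If you don't have a breakout box handy, the usual first experiment is the standard loopback tie at the connector (standard DE-9 pin numbers; whether the p5's serial ports actually check these lines is the part you'd have to puzzle out):

    pin 4 (DTR) -> pins 6 (DSR) + 1 (DCD)    <- each end sees its peer as present
    pin 7 (RTS) -> pin 8 (CTS)               <- hardware flow control satisfied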
Thanks, that's useful!
Reminds me of the old iSeries IBM computers we used to use.
This was fun! Bummer you didn't get further into it. Can't wait to see what comes of that hardware interface controller. IBM always had some super interesting bleeding-edge stuff, which is also kinda too bad in that a lot of it never really took off. The more proprietary these companies made their stuff, the less it seems to have taken any real hold in the enterprise realms. Save for the AS400, anyway. It's 2023 and my company, a major national financial institution, is still using AS400 trash lol.
lol it's always funny to hear how pervasive IBM's stuff is in the world (both old and new).
Old RS/6000s used processors like the PowerPC 601 and 604, which were also used by Macs. And according to Wikipedia, POWER3 and later processors also use the PowerPC instruction set, or the Power ISA in newer processors.
And in fact, some IBM PowerPC workstations display Apple copyright message on startup, since PowerPC was created by AIM alliance: Apple-IBM-Motorola.
I love the color coding on IBM hardware: PSUs have red tabs, meaning hot-swappable; the FSP has blue tabs, meaning the system has to be shut down and powered off first.
You also need to power the external drive. That molex plug needs 12V at least, and probably 5V for the disk controller. I've used a p520 with a Sun external HDD box. Ghetto, I know, but it got me out of a jam at work. Also, those drive interposer boards: a few of the AS/400s from the period used the same thing. The HMC you can also install into a VM with some fiddling. Looking forward to the second one. I've just upgraded my home lab to p7+'s.
I forgot to mention in the video, I had the board powered via molex and the drives were spun up, just couldn't get the p5 to see them. Other folks have commented that the AIX installer might have recognized it though, so I'm going to try that next.
We have a couple clients in manufacturing that still run P720 for custom apps
Interesting! I'll bet there's a lot of this old IBM gear still running out there.
With the SCSI adapter board shouldn't you also power the drive with the molex connector?
I forgot to point out in the video, I did power the board with the molex connector but no luck. Good news is a couple drives showed up which have the right boards, so I'll have a follow up!
I also have a p5 like yours; yes, normally this model is pleasantly quiet for a server. Hopefully you have a lot of licenses on the hardware.
From my understanding the blank DIMM slot covers are just to keep dust out, not improve airflow.
Oh that would make a lot of sense!
Nice, the HMC looks like it uses a very similar case to my 1U p505 (9115-505) server! I've been through a similar journey; my machine is also a POWER5+ dual-core, but I ended up installing Debian 11. I tried installing 12, but it locks up booting the install media. As you can imagine, the server isn't running a lot of the time, being 1U and with power efficiency not a priority for these machines.
nice! yeah this one won't be on very often haha.
Only installed these from DAT tape, but that's pretty straightforward.
This one is a low-end server; the real high-end P5 consists of 2 servers connected by a ribbon CPU cable :)
I had similar serial issues with some devices. The only solution I found which worked was building a custom serial adapter with a non-FTDI-based USB-to-serial chipset. It isn't a terminal issue; I don't know exactly why those issues happen, but I suspect the data isn't "clock" aligned to the chipset's expectation. Old serial systems don't seem to suffer from it, as they buffer stuff properly, but FTDI stuff seems to get confused.
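One workaround I've seen for overwhelmed old UARTs (just a sketch, assuming Python with pyserial; the port name and timing gap are guesses) is to pace the output manually so the target never sees back-to-back bytes:

```python
# Sketch of a pacing workaround: write one byte at a time with a small
# inter-byte delay so a slow/unbuffered UART on the old machine can
# keep up. Port and baud rate are placeholders.
import time
import serial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def slow_write(data: bytes, gap: float = 0.01) -> None:
    """Write data one byte at a time with a delay between bytes."""
    for b in data:
        ser.write(bytes([b]))
        ser.flush()        # push the byte out before sleeping
        time.sleep(gap)    # ~10 ms gap; tune for the target hardware

slow_write(b"\r")          # e.g. wake the console with a carriage return
```

No promises it fixes an FTDI-specific quirk, but it's a cheap thing to try before building a custom adapter.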
Nice that it has a built in SMS, the weird SCSI connectors must be for Sega Master System games!
Haha!
I have a lot of stuff that needs rehousing for these machines. HDDs and other stuff. How do I reach you?
Hey, you can reach out to the email address in the channel's about page.
@@clabretro I'm on mobile atm. I'm not seeing an email address. Could just be that I'm on mobile.
it's just clabretro gmail@@scooter4196
I remember purchasing 2 servers at an auction for $450. They were labeled as "non-functional". People at the auction called me stupid! Fortunately, I'd checked them beforehand and they were still on a 24-hour replacement warranty! (LOL) The next day, I had 2 brand new replacements!
ha nice!
Great content my man. I made sure to subscribe.
thanks!
Installed VIOS and AIX on these machines so many times and I don't know if I've ever had the install process be the same any of those times. If any machine is going to have ghosts, it would definitely be an IBM P5.
Haha. I'll be giving it another go soon, a couple drives showed up which should work.
@@clabretro Good luck. They're fascinating machines to tinker with, but I absolutely do not want to ever be in a position of trying to support anything older than a P7 in a production environment ever again. The very first thing I was tasked with when I started in IT was to take a walkthrough my friend had written years prior, grab a spare P5 in the lab, and install VIOS and a couple of AIX VMs onto it. Luckily our HMC had sufficient capacity that I could upload the VIOS and AIX images to it, so I didn't have to go out to the lab and insert discs or anything. But it still took me nearly a week, and the second time I tried to do the same thing, following the same instructions, I still had to troubleshoot a failed shared Ethernet adapter in VIOS. I could better understand the appeal of ESXi after that process.
Similar control panels are still on Power systems, and optional on Lenovo systems.
Surprised this has commodity RAM instead of something proprietary like their later P systems.
POWER6 was an in-order CPU.
Every time I have re-oiled bearings, they haven't worked in the long term. The best solution always seems to be replacing the bearings; they are very often standard parts, cheap and easy to source.
Yeah it'll be interesting to see how long it lasts. This machine won't be on all the time though, so not too big of a deal to re-oil in the future if I have to.
21:34 huh, wonder why not remove the CPU? Those POWER5 MCMs are cool as hell. Unless taking the heatsink off just to show it off isn't worth the risk.
Agree, was a coward though 😂
@@clabretro I wonder if the concern is dried on thermal paste would potentially cause damage to the dies or tiny capacitors on top of the MCM when removing the heatsink.
Nice old IBM logo with a very sweet case. Externally, I love this style. Do you think it's possible to convert this system into a modern NAS? Keeping the HDD bays for real use, same for the DVD drive, maybe getting the front display working with luck, but mostly rearranging the internals to fit a modern motherboard/PSU etc.?
That IBM logo in the craigslist ad I saw this on was one of the reasons I went to check it out. You could definitely figure out a way to retrofit modern components in there, it's huge. I was thinking it'd be a cool project to do that with one of the deskside tower versions.
@@clabretro I'm currently running my NAS on a Node 304. I don't have rack stuff; my pfSense box is a Lenovo M720q and I just have a small Ubiquiti switch. But! When I saw the first video and saw this monster with all the front bays, I immediately thought about a very cool NAS project. But it's not easy to find on eBay. And mostly expensive.
I want to know how much time you spend reading documentation and getting these machines from shipping to running great. I was planning on doing something similar, but have no experience with racks or server hardware... It would be great if you made a video that walks us through configuring a basic setup: what we even need to consider before taking such a step, what kind of connection (cable) to use, and what programs you use to communicate via, for example, the serial port, etc. ;-) What should you look for when considering buying such equipment? (I've heard of some licensing that can prevent you from doing things with the server; can you buy that license, or do something else to make things work?)
Good question... with this older stuff, I spend a lot of time reading documentation and getting it to work (I try to get most of that part on camera). I wouldn't really recommend these older servers as "full time" homelab units, they're far out of date and very power hungry, but they're certainly fun to play around with.
A lot of the answers all depend on the server in question, but I'll think about all of it, maybe I could make an "overview" video of some sort that would be generic enough for folks to learn about servers in general.
I just happen to have around half a dozen of those funky IBM disk tray/interposer boards that I have no use for. I think they're the exact part you need; I can send photos and part numbers if you're interested.
I actually picked up a couple drives with the boards on eBay, if I have any trouble with those I'll reach out. I appreciate the offer though!
Wow, those drive tray cages: a single cast metal piece. These days they're usually 2 or 3 separate cast pieces with rivets, pins, or screws. This one is built like a brick. Still, I wish there was a common standard for drive cages.
Great stuff mate.
I have to wonder if the firmware update might fix that external SCSI issue? Just a thought.
Could be, not a bad idea. I have some drives coming on eBay which hopefully have the right connector 🤞
awesome system, and great progress getting it this far! can it run IBM's z/OS?
not that I'm aware of, unfortunately. now if I could only get my hands on some IBM Z hardware 😆
I am curious if it could run a regular Linux distro? I think Debian and openSUSE have POWER-based images.
I would think it's possible as long as there are POWER-based distros, but maybe things like the disk controllers or other proprietary bits would cause headaches.
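One low-stakes way to scope this out before touching the hardware might be to smoke-test an installer image under QEMU's pseries machine. A rough sketch (assumes qemu-system-ppc64 is installed; the ISO filename is a placeholder, and note that modern ppc64el images target much newer POWER chips than a POWER5, so an older or big-endian ports image may be needed for the real box):

```python
# Hedged sketch: boot a POWER install ISO under QEMU to see whether the
# installer itself runs, before burning media for real hardware.
# Assumes qemu-system-ppc64 is on PATH; the ISO name is hypothetical.
import subprocess

subprocess.run([
    "qemu-system-ppc64",
    "-M", "pseries",          # emulated IBM pSeries (PAPR) machine
    "-m", "2048",             # 2 GB of guest RAM
    "-nographic",             # serial console on stdio, like the real box
    "-cdrom", "debian-ppc64-netinst.iso",  # placeholder image name
], check=True)
```

That obviously can't prove the p5's own disk controllers will behave, but it's a quick sanity check that the distro actually boots on the architecture at all.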
Dumb question, but did you power the hard drives? That SCSI cable does not carry power, right?
Not a dumb question, I forgot to mention it in the video. I did power it via the molex cable and the drive was spun up, just couldn't get the p5 to detect it. I have two drives on the way which hopefully have the right connectors 🤞
Based on the ASMI, this IBM server was actually being used by IBM before you purchased it. The IP in the ASMI was 9.5.32.198/24 (the subnet is owned by IBM), and for some odd reason, I guess they have never heard of NAT, seeing as how the IP address is publicly routable and statically set.
Ahh that was actually a screenshot from some IBM documentation, I filmed that before I actually got the ASMI working on my own p5. Very good catch though, that's interesting! Clearly they did their own doc creation on their own subnets.
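For the curious, 9.0.0.0/8 is the giveaway: that block has long been assigned to IBM, which is why a 9.x address in a screenshot points straight back at IBM's own network. Quick sanity check with nothing but the Python standard library:

```python
# Check whether the address from the screenshot falls inside IBM's
# long-held 9.0.0.0/8 allocation, and confirm it's publicly routable.
import ipaddress

addr = ipaddress.ip_address("9.5.32.198")
ibm_block = ipaddress.ip_network("9.0.0.0/8")

print(addr in ibm_block)                 # True -- inside IBM's block
print(addr.is_private, addr.is_global)   # False True -- publicly routable
```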
Nice video. Also very cool hardware. Perhaps I missed something in your video, but you do not seem to have any power for your external hard drive. SCA does provide power on the connector, but your SCSI controller cannot provide that; you need to provide external power. There is a molex power connector on your converter board.
I forgot to show it specifically in the video, but I did power the adapter board via molex. Still no luck. I have a couple drives on the way which should hopefully work 🤞
I don't believe that external SCSI cable provides power to the drive... does it spin up?
You're right, I forgot to point out in the video that I was powering it with the molex connector, so the drives did spin up but still no luck. I have a couple drives in from eBay with the right boards so hopefully those will work fine 🤞
@@clabretro Fingers crossed :)
Now you need either an IBM or a Sun rack :)
If only!
My son and I have two System x servers; I can't remember the model number. Decked out with 128 GB of RAM and dual Xeon procs. You know they're old because they only have a total of 12 cores, well, a mixture of cores and Hyper-Threads. I use one for virtual machines and my son uses his for Minecraft servers and stuff.
Given the hardware serial connection, is that Linux box an AOpen Digital Engine box of some flavor meant for digital signage?
It's just a Lenovo M900 ThinkCentre. Really convenient with the physical serial port though.