Let's Check Out an Old Blade Server System with 32 CPUs!

  • Published: Sep 8, 2024
  • Many, many servers these days run as virtual machines -- but there was a time when virtualization was still just catching on, and companies needed physical servers to be as dense as possible. So let's look at a blade server system from around 2010 that packs 32 CPU sockets and weighs 500 pounds!
    Image of C7000 chassis with full-height blades: commons.wikime...
    --------------------------------------------------------------------------------
    Please consider supporting my work on Patreon: / thisdoesnotcompute
    Follow me on Twitter and Instagram! @thisdoesnotcomp
    --------------------------------------------------------------------------------
    Music by
    Epidemic Sound (www.epidemicsou...).

Comments • 402

  • @stevef6392
    @stevef6392 4 years ago +529

    If you let that thing roll down a hill, you could call it a blade runner. I'll crawl back in my cave now.

    • @andrewgwilliam4831
      @andrewgwilliam4831 4 years ago +18

      You'd certainly be "retired" if you came between it and a wall!

    • @k033as9
      @k033as9 4 years ago +3

      good joke

    • @longnamedude3947
      @longnamedude3947 4 years ago +7

      @@andrewgwilliam4831 Not a bad pension though, and you'd have a nice pay day if you survived the onslaught of so many blades at once.....
      Breaking News!.....
      "Man survives attack from 16 Blades, Says it was a close shave....."
      Lol

    • @kennethsrensen7706
      @kennethsrensen7706 4 years ago

      Ha ha, joke of the day.

    • @SeleniumGlow
      @SeleniumGlow 3 years ago +1

      Top tier pun. Cheers.

  • @mabbaticchio
    @mabbaticchio 4 years ago +165

    I remember installing one of these for a client about 15 years ago who had purchased the chassis and 8 blades. Unfortunately he bought it thinking that the power of the blades was aggregated to make one super server.
    The look on his face when I tried to explain that this was not possible, and that he was the proud owner of 8 individual dual-CPU servers rather than 1 server with 16 CPUs and the combined RAM, was priceless. He just wanted a very powerful SQL server and ended up with 8 underpowered servers.

    • @kaitlyn__L
      @kaitlyn__L 4 years ago +18

      oh no!

    • @djangoryffel5135
      @djangoryffel5135 4 years ago +10

      Back in the day, the double-stacked DL580, the so-called DL980, wasn't a thing yet, or was it?

    • @SaberusTerras
      @SaberusTerras 4 years ago +6

      @@djangoryffel5135 DL980 was a G7 offering, about 10 years ago. IBM had the 3850 series that could be cabled together into a 3950 I think a bit before that.

    • @TechieZeddie
      @TechieZeddie 4 years ago +12

      I think the best use for a blade server like this is VMWare vSphere. Being able to vMotion servers from one host to another (host being a blade) is pretty cool! I wonder if other hypervisors do this (Hyper-V, Xen, etc). Our company uses VMWare so that's what I'm most familiar with.
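      A minimal sketch of what the scripted live migration described above can look like with pyVmomi (the VMware vSphere Python SDK); the vCenter address, credentials and VM/host names are hypothetical placeholders, not anything from the video:

# Sketch: move a VM to another host (blade) in the cluster via the vSphere API.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()  # lab use only; skip certificate checks
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_obj(content, vim.VirtualMachine, "test-vm")
    dest = find_obj(content, vim.HostSystem, "blade05.example.local")  # another blade host
    task = vm.MigrateVM_Task(pool=None, host=dest,
                             priority=vim.VirtualMachine.MovePriority.defaultPriority)
    # A real script would poll task.info.state until success/error here.
finally:
    Disconnect(si)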

    • @RyanRiopel
      @RyanRiopel 4 years ago +27

      Pretty wild that somebody would put down that kind of money without proper research

  • @desertmike680
    @desertmike680 4 years ago +18

    I was a Data Center Engineer for HPE for 3 years, prior to HPE outsourcing us to Unisys. Anyway, these blade servers were so much fun to work on, especially Gen9 and beyond. Gen10 was technically Synergy, which included iLO 5 and booted so much faster! Loved working for HPE and working on the ProLiants, Apollos, Blades and 3PAR SANs.

  • @matthewstockwell1748
    @matthewstockwell1748 4 years ago +9

    Hey there 😁. The very first data centre I built utilised this exact setup: HP C7000 chassis with the BL360C G1 blades. We also implemented the HP EVA with Fibre Channel. I was 17 when I did this. Now that I'm 31, my path crossed with the company who owned it; I contracted to migrate them to the cloud years later. The best bit is, I inherited this monster for FREE!! And the rack!! It now sits in my lounge room on wheels with a glass top. Terrific coffee table. Keep up the good work sir, love your channel.

    • @Mystery5Me
      @Mystery5Me 4 years ago

      That thing must be an ordeal to move - I don't know anyone else who has a 490lbs coffee table!! 😂😂

    • @matthewstockwell1748
      @matthewstockwell1748 4 years ago

      Mystery, 180 kg of pure nostalgia haha. I've got a heavy tradesman's trolley for when it's spring cleaning time.

  • @izzieb
    @izzieb 4 years ago +40

    The videos where you show old server equipment are my favourites. While I love the retro gaming videos etc., these are far more interesting as they're something we don't see as much of on YouTube.

    • @caromac_
      @caromac_ 4 years ago +4

      They're a whole different world to normal computers.

  • @tonyshen1
    @tonyshen1 4 years ago +174

    This explains why these Xeons are so cheap today since so many of them were retired.

    • @the32bitguy
      @the32bitguy 4 years ago +12

      If you are looking for good-value PCs, Xeon workstations on eBay are better than most of the office small-form-factor computers you will also find there.

    • @onecracker6064
      @onecracker6064 4 years ago +5

      LGA 1366 Xeons are dirt cheap, except for the X5690.

    • @mrcrisplinuxuser4180
      @mrcrisplinuxuser4180 4 years ago +12

      Yep. Not bad if you need a budget gaming PC. Though DEFINITELY not the best. (These things were never made for gaming anyway)

    • @chrisconley3579
      @chrisconley3579 4 years ago +4

      It's true. I upgraded my server to a much more powerful one for cheaper than I bought my original. Some really good deals out there

    • @kaitlyn__L
      @kaitlyn__L 4 years ago +7

      @@mrcrisplinuxuser4180 not the best for pure gaming... but I imagine that, much like Ryzen, it's pretty competent at gaming while doing a bunch of other intensive stuff at the same time? Like cross-compiling for obscure architectures or streaming or ripping all your DVD collection in the background... just to name a few. I imagine they're pretty beefy for F@H too.

  • @mjoconr
    @mjoconr 4 years ago +4

    We buy second-hand HP C7000 chassis all the time. We install G7 or G9 blades along with drive storage blades, so 8 CPU blades and 8 drive blades, and use Proxmox and Ceph with a quad 10G cross-connect. Second-hand you can build this for about 10K. Cheaper than anything else out there.
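    A tiny sketch, assuming you are logged into one of the Proxmox nodes of a build like the one above, of sanity-checking the cluster and Ceph with the stock CLI tools called from Python:

# Sketch: quick health check of a Proxmox VE + Ceph cluster from one node.
import subprocess

def run(cmd):
    """Run a command on the Proxmox node and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

print(run(["pvecm", "status"]))      # quorum / membership of the Proxmox cluster
print(run(["ceph", "-s"]))           # overall Ceph health, OSD and PG summary
print(run(["ceph", "osd", "tree"]))  # which OSDs live on which blade/storage node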

  • @K3lwin
    @K3lwin 4 years ago +24

    About power redundancy: it is configurable through the OA/iLO. In AC Redundant mode the whole blade chassis can survive with full performance on just three power supplies. That makes sense: most datacenters provide power to the racks from two independent power sources, so each set of 3 PSUs would get power from the same source.
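    A short sketch of checking (and, commented out, changing) the enclosure power mode over SSH with paramiko; the OA hostname and credentials are placeholders, and the exact Onboard Administrator CLI wording should be verified against the OA CLI guide for your firmware:

# Sketch: query the c7000 Onboard Administrator for its power/redundancy status.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("oa1.example.local", username="Administrator", password="secret")

# "SHOW POWER" on the OA CLI reports the redundancy mode, present/failed supplies and budget.
stdin, stdout, stderr = client.exec_command("SHOW POWER")
print(stdout.read().decode())

# Switching the enclosure to AC-redundant (3+3) mode would be along the lines of:
# client.exec_command("SET POWER MODE REDUNDANT")
client.close()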

    • @mndlessdrwer
      @mndlessdrwer 2 years ago

      If you're going to be racking several in a single rack, it's likely that you'll be using quite a few different PDUs, likely off of 3-phase power. In the most power-hungry racks I've configured, I had six PDUs with three dual-whip tap cans spread between the three phases, so each PSU was connected to a different PDU for maximal redundancy and for load balancing across the PDUs, because the PDUs I had available couldn't handle more than 60A and I needed to get 4 blade servers in a rack with power overhead.

  • @Jacobhopkins117
    @Jacobhopkins117 3 years ago +7

    We still have ~2000+ C7000 enclosures deployed and I can tell you it is a rock-solid platform. We have a mix of G1 to Gen10 blades deployed across them, mostly for virtualization workloads, but the enclosures themselves are tanks. HPE has officially discontinued the C-Class platform after the BL460c Gen10 and moved to Synergy as its latest bladed compute architecture. Crazy to think that a piece of technology announced in 2006 supported new hardware up until 2018.

    • @mndlessdrwer
      @mndlessdrwer 2 years ago +2

      The things that failed most often were the things that HP didn't make: HDDs, Cisco switches, and Brocade switches. But those were all designed to be redundant, so even if something does fail, it rarely takes the blade fully down before you get a chance to address the problem.

  • @alexscarbro796
    @alexscarbro796 4 years ago +7

    I have a c3000 as an anchor to stop anyone stealing my actual server!
    I do love how cheap the parts are for these now. You can buy 10Gb Ethernet and InfiniBand switches and mezzanine cards for pocket change. The cables cost more than the parts... so it is just as well the backplane is so well thought out, as you need very few for a local MPI cluster with Ethernet for network boot and InfiniBand for compute traffic. A real bargain and a great tool for learning HPC.

  • @kc9nyy
    @kc9nyy 4 years ago +41

    Still manage several of these with more recent blades. One nice thing is using HP's FlexFabric modules and SAN booting: if we have a hardware failure we can just reassign the profile and reboot the failed system on a different blade, even in a different enclosure.
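    A hedged sketch of that recovery workflow, assuming the Virtual Connect Manager CLI is reachable over SSH; the profile name, bay numbers and exact command syntax are assumptions to check against the VCM CLI guide for your firmware:

# Sketch: move a Virtual Connect server profile (MACs/WWNs, SAN boot target) to a spare blade.
import paramiko

vcm = paramiko.SSHClient()
vcm.set_missing_host_key_policy(paramiko.AutoAddPolicy())
vcm.connect("vcm.example.local", username="Administrator", password="secret")

for cmd in (
    "poweroff server enc0:3",            # make sure the failed blade is down
    "unassign profile sql-prod-01",      # detach the profile from the failed bay
    "assign profile sql-prod-01 enc0:7", # reassign it to a spare blade in the VC domain
    "poweron server enc0:7",             # the spare boots from the same SAN LUN
):
    _, out, _ = vcm.exec_command(cmd)
    print(out.read().decode())

vcm.close()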

    • @kennethgrainger1112
      @kennethgrainger1112 4 years ago +5

      I've been doing that with Cisco for a while. It's nice to see some abstraction. The big problem with these systems (HP/Dell/Cisco) is that the density drives the datacenter power/cooling requirements: 10 kW in 8U, in a 52U rack, is 60 kW per rack without redundancy. It boggles the mind how to get that much juice into 19". :-)
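      A quick back-of-the-envelope check of those figures (assumed values: 10 kW per 8U chassis, six chassis in a 52U rack, a 208 V three-phase feed at unity power factor):

# Sketch: rack power and three-phase current for the density described above.
import math

chassis_kw = 10.0
chassis_per_rack = 52 // 8           # six 8U chassis fit in a 52U rack
rack_kw = chassis_kw * chassis_per_rack

volts_ll = 208.0                     # line-to-line voltage, US three-phase
amps = rack_kw * 1000 / (math.sqrt(3) * volts_ll)

print(f"{rack_kw:.0f} kW per rack is about {amps:.0f} A of three-phase 208 V, before redundancy")
# -> 60 kW per rack is about 167 A, which is why feeding a single 19-inch rack gets hard.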

    • @mndlessdrwer
      @mndlessdrwer 2 years ago

      Oh, did you do the iLO chassis link? I set that up one time with literally zero documentation about it and I was super excited to get it up and running. You have to chain all of the connected chassis together like a token-ring network.

  • @K3lwin
    @K3lwin 4 years ago +5

    Virtualisation does not put blade servers out of favor; they are still widely used as nodes in virtualisation clusters. Rack space and power consolidation still make sense in that regard. HPE made the new Synergy lineup of blade servers specifically to be used in VMware clusters with tight vCenter integration.
    It makes more sense to put VMs on a bunch of moderately powerful nodes than to cram them all into two high-performance ones. If a node goes down, you have more compute and memory room to relocate the affected VMs.

    • @ThisDoesNotCompute
      @ThisDoesNotCompute 4 years ago +3

      Certainly true, but cost is a big factor as well - blade servers/chassis simply cost more than an equivalent setup of separate servers. If density is the most important factor then the additional cost could be worth it (if, for example, it keeps you from having to install/rent additional rack space). For a lot of businesses, though, density isn’t such an issue now.

    • @jfbeam
      @jfbeam 4 years ago

      Depends on where you get them, and how new they are. At work, they got on to the blade-center train for a year or two. The efficiencies that came with density were attractive -- 80 core processors, huge amounts of RAM, 10G networking, SSD everything, uses less power than a lightbulb, and is the size of a cellphone, etc., etc. (320core/U, ~500W/U) That is, until they started writing the checks. Then my age old process of buying other people's previous generation hardware for pennies started making _real_ sense. My way might take up 14x the space, and burn 4x the power, but at 1.5% of the cost.
      (And yes, I provided numbers for building our own (small) data center [283sq.ft, and it took 1.5yrs to get it built], put our racks in someone else's DC, and just rent all of our infrastructure. Building our own was massively cheaper. And that tiny room wasn't cheap -- about 20% of the cost of those blade systems.)

  • @robertjung8929
    @robertjung8929 4 years ago +11

    Fun fact: each PSU provides 12V @ 200A (the newer 2.4kW ones) and the backplane has a solid copper plate to distribute that 1kA to the blades :D

  • @MrKillswitch88
    @MrKillswitch88 4 years ago +52

    Always kinda sad to see old machines like this get broken down for scrap on YouTube, especially those IBM systems.

    • @elgicko
      @elgicko 3 years ago +1

      It's HP!

    • @IanBPPK
      @IanBPPK 3 years ago +5

      These machines are destined for scrap eventually. The storage, CPUs, and Memory tend to be resold however.

    • @seshpenguin
      @seshpenguin 2 years ago +2

      @@IanBPPK There is a nice market where Chinese board manufacturers take components (like the chipsets) off these proprietary motherboards and make "brand new" motherboards in standard ATX sizes and with modern features like NVMe. It's actually a pretty good idea, cause a lot of the parts on these boards would otherwise be scrap.

  • @opiniononion919
    @opiniononion919 3 years ago +2

    This looks just perfect.
    They are still reasonably powerful for non-enterprise users and more or less dirt cheap.
    You can rent some space at a data centre and don't even have to worry about power consumption or stuff like that. And iLO makes it so I can still handle it like a home server.
    Just add a 4U NAS server and you're good to go, I'd say. You've got your own decent server setup for under 5k.

  • @KeplerElectronics
    @KeplerElectronics 4 years ago +13

    Always love the server decommission videos, it's neat getting to see some older enterprise-level stuff that I'll never get to see IRL.

  • @jonr3671
    @jonr3671 3 years ago +2

    Yeah, I ran a C7000 with 1x G1 and 2x G5 blades for a home lab. I had to hook it up in my detached garage because it was a little loud, even in its "running" mode. I ran it with 3 power supplies and all the fans, on 220V. My power bill only went up by $100 a month.

  • @shmehfleh3115
    @shmehfleh3115 3 years ago +3

    I used to work on an HP blade chassis like that, in a storage QA lab. Ours had a bunch of brain-damaged Brocade FC switches in those pass-thru bays that they called access gateways. They were used to connect it to a bunch of HP-branded storage arrays.

    • @mndlessdrwer
      @mndlessdrwer 2 years ago

      You don't know brain-damaged Brocade FC switches until someone tries to update the firmware on one and ends up accidentally deleting the kernel. Wasn't actually me that time, and Brocade refused to send us the USB key that contained the full image with the kernel. Oh, did I mention that the Brocade FC switches only work with Brocade USB drives? Because they do.

  • @LukeStratton94
    @LukeStratton94 4 years ago +118

    Disappointed you didn’t fire it up so we could hear the ‘take off’

    • @timmooney7528
      @timmooney7528 4 years ago +6

      We could witness his power bill go up, too!

    • @Fronzel.Neekburm
      @Fronzel.Neekburm 4 years ago +2

      It's not possible to plug each of the PSUs into standard wall sockets to power it up. If you look closely at the back, the power sockets are different from standard PC/server plugs. From memory I think each plug needs 30A to drive it. Dells are the same. Awesome bit of kit.

    • @bigbassjonz
      @bigbassjonz 3 years ago

      Requires 3-phase power to run this chassis. I had 2 c7000 chassis and 2 c3000s.

    • @Ramdileo_sys
      @Ramdileo_sys 3 years ago

      13.7 amps!!! (like 5 air conditioners such as mine)... and that thing has 6 of those PSUs... in just one of the hundreds of trolleys like this one in a datacenter... This is why I choose not to ruin my eyes and keep using my incandescent bulbs, and enjoy the fuel car... There is not enough surface on the planet to put the solar panels needed for Europe's datacenters, not even to mention the ones in America... and for what?? For Facebook and Twitter????

    • @thisnthat3530
      @thisnthat3530 3 years ago

      @@bigbassjonz It'll run quite happily with all 6 PSUs plugged into a single phase. That's how I have mine set up.

  • @bobblum5973
    @bobblum5973 4 years ago +1

    I worked with quite a few of these enclosures and blades, multiple generations. It can run with a single Onboard Administrator ("OA") module, but as I discovered that little display panel only works from the primary one. OA1 was dead, thought it was the display itself. Read the docs, swapped the two OAs and the display worked fine. Ran on one OA during the build & testing while waiting until a replacement OA was shipped.
    When you were showing the expansion module underneath that small clip/tray you removed, that tray holds the battery for a storage array accelerator. It provides power for the controller cache memory, so you can speed up reads & writes on the RAID controller. Helps performance a fair amount, but those proprietary batteries tended to wear out.
    For those of you on a (power) budget, there's a C3000 chassis that only has eight slots, and can be set up with casters for office use.

  • @xXfzmusicXx
    @xXfzmusicXx 4 years ago +17

    You can get storage blades that hold up to 12 2.5" drives per blade, so storage isn't really a big issue here; paired with 10Gb Ethernet cards you have a good base for virtualization.

  • @JonasDAtlas
    @JonasDAtlas 4 years ago +19

    Fascinating stuff. I'd love to get to play with things like this some more, but alas I mostly work for small businesses that are happy to even have one server.

  • @ulrikcaspersen9145
    @ulrikcaspersen9145 3 years ago +3

    I recently saw a system very similar to this, but instead of each blade being its own server, they could be turned on and off with PXE, and then that blade would be a temporary host for one or more virtual servers. This was meant to use as little power as possible when there was little or no activity, by hosting a number of VMs on one or two physical servers; whenever demand increased, one or more additional physical servers (blades) could be turned on to host some of these VMs.
    If I recall correctly, the software used here was MaaS (Metal as a Service) by Canonical, for managing the hardware and hosting the VMs. And I would like to make one thing very clear: I am not sponsored by or in any way affiliated with Canonical.
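    An illustrative sketch of that power-on-demand idea driven from Python through the MAAS CLI; the profile name, tags and constraints are hypothetical and the exact arguments should be checked against the maas CLI help on your installation:

# Sketch: allocate a spare blade in MAAS and deploy it as an extra VM host when demand rises.
import json
import subprocess

def maas(*args):
    """Run a maas CLI command under a logged-in profile (here 'admin') and parse JSON output."""
    out = subprocess.run(["maas", "admin", *args], check=True, capture_output=True, text=True)
    return json.loads(out.stdout) if out.stdout.strip() else None

# Pick a powered-off blade matching our constraints, then deploy it as a KVM host.
machine = maas("machines", "allocate", "tags=blade", "cpu_count=8")
maas("machine", "deploy", machine["system_id"], "install_kvm=True")
print("Deployed", machine["hostname"])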

  • @augurseer
    @augurseer 4 years ago +3

    Remember setting one up for a client: buying the chassis, the blades... The whole thing weighed a ton. Loud as hell!!! It was about CPU density, but it had some complexity in its setup and use. I think EPYC Rome and VMs kill the blade idea pretty hard.

    • @mndlessdrwer
      @mndlessdrwer 2 years ago

      Not as much as you might expect, since Dell is moving to high-density compute nodes by making 2U blade chassis with 4 blades supporting 2 Rome processors per blade. I believe their intention was to pair them with in-rack switching and a PowerStore or PowerMax storage array for VM storage.

  • @TheOborune
    @TheOborune 4 years ago +23

    I worked in an office where we had one of these used as a coffee table, and I walked into the corner of it and I still have a scar on my leg from it. Thing is stupid heavy.

    • @longnamedude3947
      @longnamedude3947 4 years ago +6

      It was self-defense!
      Hopefully you learnt that these things bite as hard as they bark.
      Or howl, seeing as those fans push so much air that the whole thing may as well be used to drive around town..... Lol

  • @3ffrige
    @3ffrige 4 years ago +4

    In some ways, virtualization made these platforms even more popular. For example, if you're running a Docker environment with Kubernetes for orchestration, you can dynamically scale up your compute power on demand if your apps need it. You have the dynamic scalability of 16 servers if you need the additional power. Blade servers are also used if you need massive compute power. If you have applications for weather forecasting, hardcore video rendering, computing hashes for Bitcoin mining (I know they have dedicated ASICs doing this, but that's the only thing they're good for; you get more flexibility on a generic compute platform), or for computing Mersenne primes for whatever reason, these platforms can give you supercomputer powers in the footprint of a single rack.
    Another thing I love about these platforms is you literally can set up your networking environment remotely without having to plug in gobs of 10GE AEC copper cables. For example, on a Cisco UCS B platform with a Nexus 6K fabric, you can create a sh!tton of virtual interfaces for the blade servers and uplink them to whatever number of port groups you desire, all remotely.
    Of course you have to physically install it in the lab or data center, but after that, you can set up everything remotely via CIMC (or iLO, DRAC, or whatever your favorite server brand uses), including installation of the software on the blades as well as configuring whatever networks you require on the switch fabric.
    I love blade servers for those reasons. They're also a huge headache, because the fabric is another layer of complexity, especially in virtualized environments. You can have a Docker/KVM environment running in a virtual machine in VMware, going over virtual ports, then to virtual physical ports from the blade server's VIC, then to the virtual port groups in the fabric, through a VLAN in the fabric, then finally coming out of a physical port on the switch fabric. Yup, flexible ecosystem but a pain in my ask.
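    A minimal sketch of the "scale up on demand" point above, using the official Kubernetes Python client to bump a Deployment's replica count; the deployment name and namespace are placeholders:

# Sketch: add replicas so the scheduler can spread the extra pods across blade nodes.
from kubernetes import client, config

config.load_kube_config()                      # or load_incluster_config() inside the cluster
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(name="web", namespace="default")
dep.spec.replicas = (dep.spec.replicas or 1) + 4

apps.patch_namespaced_deployment(name="web", namespace="default", body=dep)
print("Scaled 'web' to", dep.spec.replicas, "replicas")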

  • @PaulPaulsen
    @PaulPaulsen 3 years ago +1

    This takes me back... I was invited to Lyon to get trained on HP CCI back in 2004. Those were 20 PCs in a single chassis to which the user would connect via thin clients over RDP. Interesting stuff back in the day :D

  • @djangoryffel5135
    @djangoryffel5135 4 years ago +3

    0:55 - it's a C7000 G2. Unlike the G1 it already has the wider display, but the backplane doesn't support as much throughput as the G3 or G4. I used to have a G1 as well as a G2. Now I have a G4 and a G3 as a spare.
    For those who are wondering, the design of a C7000 G1 is like the ProLiant server G3 and G4 series, the C7000 G2 design is like the ProLiant server G5 series, the C7000 G3 design is like the ProLiant server G6 and G7 series, and the C7000 G4 is like the G8 and G9.
    Some parts from the G1 are still compatible with the G4. All in all, a fascinating machine, but quite annoying to manage.

    • @ingy23
      @ingy23 4 years ago

      Still running G1s and G2s lol. They connect at 160 Gbps without issues, but we're replacing them soon as no new blades are coming out for them now.

  • @davefarquhar8230
    @davefarquhar8230 4 years ago +3

    I worked on some a generation or two older than that model. I liked them because they were really well built and when they did fail, they had lots of diagnostic LEDs so I knew exactly what failed and what to do about it. That really helped minimize downtime.

  • @retromaniac4563
    @retromaniac4563 4 years ago +1

    We acquired the C7000 chassis and blades as hosts for virtualization that replaced rack servers. Now hyperconverged rack servers replace blades in many cases.

  • @jdkingsley6543
    @jdkingsley6543 3 years ago +6

    I love these, the cloud and virtualization dependency is going to bite us in the rear sooner or later.

    • @bjre.wa.8681
      @bjre.wa.8681 3 years ago +3

      I agree. Even with virtualization you still have to have some quality hardware, memory especially, and you can still use the SANs if needed. There's quality configuration time going into all the VM stuff, and it damn well better be monitored (maintained).

  • @streetsafari0
    @streetsafari0 3 years ago +2

    Blades fell out of fashion as hosting centers became sensitive about electricity use. As the cost of colocation went up, companies bought in space-saving blade systems. So server densities went up dramatically, which led to colo and hosting centers separating out their charges for electricity, which became a considerable charge for the customer. The solution to that was the VM. The issue was that most of these physical (blade) servers did very little, but still consumed 75-100W each. A VM could sit idle and cost a fraction of that. This method not only reduced electricity costs, but colocation footprints. Colocation by the mid-2000s in places like London Telehouse was already pushing towards 5 figures a year for a single rack, before the electricity cost was added.

  • @memorekz
    @memorekz 4 years ago +10

    Server boards and chassis are so beautifully engineered, compared to the now ancient ATX format.

  • @I_am_Allan
    @I_am_Allan 4 years ago +8

    This video served well to explain some recent "historic" servers.

  • @joelspan
    @joelspan 3 years ago +4

    I used to be an admin for two stacks of these - 4 chassis on each side. If those cabinets weren't bolted to the ground, I'm pretty sure they would fly.

    • @mndlessdrwer
      @mndlessdrwer 2 years ago

      Having moved a chassis on my own using an equipment lift, then repopulated it fully with 16 blades, I can confirm that despite the impressive cooling performance, they weigh a metric shit-ton and wouldn't budge an inch even if they weren't bolted down.

  • @DoctorX17
    @DoctorX17 4 years ago +114

    128 cores, up to about 1.5TB of RAM... Could still be fun to play with.

    • @pompshuffle562
      @pompshuffle562 4 years ago +71

      It's all good and fun until you see how much it added to your power bill

    • @Darth001
      @Darth001 4 years ago +6

      I'd buy one

    • @kaitlyn__L
      @kaitlyn__L 4 years ago +27

      @@pompshuffle562 if your heating is electric anyway and it displaces other heating devices, you can run it for "free"! :D

    • @pompshuffle562
      @pompshuffle562 4 years ago +13

      @@kaitlyn__L That is, until it's above 90 degrees Fahrenheit outside

    • @kaitlyn__L
      @kaitlyn__L 4 years ago +8

      @@pompshuffle562 well of course, if you're not wanting to run heating anyway. I did say if it was displacing other heating, after all

  • @sillyvain6401
    @sillyvain6401 3 years ago +2

    I have one of these and it's running in my condo with 125V power supplies; I got 16 servers as well. They are still good: you can upgrade the processors to dual X5675 6-cores with a PassMark of 12000+, or you can put up to Gen10 blades into it... I've got a mix of Gen6s and Gen8s currently, with a storage blade of 12 disks. I am running Proxmox on this; it has 14 clusters and ZFS. You can change the HP P410i and P420i RAID controllers into HBA mode... I am slowly converting everything to Gen8 and Gen10. The problem is that the prices of the Intel processors in the Gen10 are very expensive. Under the drive bays on the BL460c G6 blades there is a USB connector if you want to run Unraid, and it works too. I am running 900GB 10K SAS drives on these servers... the SATA speed on these is only SATA II. The HP C7000 blade system is being replaced by the HPE Synergy 12000.

  • @jasonsimonsen4184
    @jasonsimonsen4184 4 years ago +6

    Love the C7000.... however they do have a storage blade, so no need for an external SAN. Ours stores 32TB. I run Proxmox and Kubernetes, and even for an 8-year-old system, it still really performs.

  • @jsawhite
    @jsawhite 4 years ago +5

    Hadn't seen those in a long time! We actually had 4 of them when I worked at Purdue University several years ago. We were one of the first groups that bought HP's new BladeSystem Matrix to build out our SAP environment. We ran VMware ESX on each of them and had 2 HP EVAs (one in each building, with 2 chassis) for storage connected via Fibre Channel. It was quite a setup and had a lot of issues. But when it worked it was very nice!! Quite outdated from today's hyper-converged standpoint... :)

  • @AgustinCesar
    @AgustinCesar 4 years ago +25

    "....but there was a time when virtualization was still just catching on, and companies needed physical servers to be as dense as possible. So let's look at a blade server system from around 2010..." I FEEL OLD... and I am just 40...

    • @gorillaau
      @gorillaau 3 years ago +1

      Same here. I remember when my ex-workplace installed one of these at the data centre. Very heavy piece of iron! We had about half an inch of clearance down the cold aisle to get the chassis into the rack.

    • @ChairmanMeow1
      @ChairmanMeow1 5 months ago +1

      I'm the same age as you. I get it. :\

  • @tad2021
    @tad2021 3 years ago +2

    I got 2 of these chassis in my lab. Got a crazy deal on both NIB for almost the cost of shipping. Only ever used one and have had just a few blades running over the last 4-5 years though. For lab stuff, it's handy being able to spin up extra nodes by just shoving them in.
    Mainly, it's just cool.

  • @Bierkameel
    @Bierkameel 4 years ago +1

    I used to manage a lot of these, with half-height and full-size blades. The performance was pretty good but they were really expensive.
    We switched from HP to 1U Dell servers and could not be happier.

  • @BangBangBang.
    @BangBangBang. 4 years ago +1

    Data centers are still using them as "hybrid servers" where virtualization will split the server into two or more virtual machines. Usually QEMU/KVM. Or these dirt cheap $10-20/mo dedicated "instant deployment, no upgrades" servers too.

  • @jafirelkurd
    @jafirelkurd 4 years ago +5

    They did make storage blades for these too, so that you didn't have to use a SAN.

    • @bobblum5973
      @bobblum5973 4 years ago +1

      Yes, they did! SB40 was a six-drive model as I recall. I know you could pair it up completely with one of the server blades, but I think you could also configure one Storage Blade to offer drives to multiple server blades (but don't quote me on that).

  • @RetroPanic
    @RetroPanic 4 years ago +15

    Know it well, just decommissioned one also! Funny to see.

  • @CobsTech
    @CobsTech 3 years ago +1

    I picked one of these up a few months ago for $50 without any blades. Really fun setting it up, but holy does it suck power and make a lot of noise: 350W idle with no blades installed.
    Sad to see it decommissioned though. I stuck a few G8 and G6 blades in mine and it performs quite nicely. As far as I'm aware, these were designed to be "never obsolete" by having technicians simply remove old blades and install the newer G8-G10 ones.

  • @MegaDraadloos
    @MegaDraadloos 1 year ago

    I love these. Have a C7000 with 12 G8 blades at home. Couple of 12-bay storage blades in it for vSAN, HPE 3PAR with fibre connected to the C7000 and blades. Very rock solid, flexible solution!

  • @markarca6360
    @markarca6360 4 years ago +4

    These are HP ProLiant BL560c G5, G6 or G7 blades in an HP BladeSystem c7000 enclosure.

    • @blajek
      @blajek 3 years ago

      you can add Gen9 and Gen10 :)

    • @bigbassjonz
      @bigbassjonz 3 years ago

      I ran BL460c in mine along with a few storage blades.

  • @bobblum5973
    @bobblum5973 1 year ago

    I worked with many of these, and all the features you spoke of, working together as one overall system of blade servers, chassis and redundancy, were simply amazing.
    I wrote Windows command-line scripts that accessed the servers and iLO through PuTTY's "plink" ssh, gathering data and generating reports on things like pending RAID accelerator battery failures, RAID drive status, firmware versions, just about anything.
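    A small sketch in the spirit of those scripts: run a command against each blade's iLO through PuTTY's plink and collect the output into a CSV report. The iLO hostnames and credentials are placeholders, and the SMASH CLP target shown ("show /system1/firmware1") is an assumption to adjust for your iLO generation:

# Sketch: gather iLO command output over plink and write a simple report.
import csv
import subprocess

ILOS = ["ilo-blade01.example.local", "ilo-blade02.example.local"]  # hypothetical hostnames

with open("ilo_report.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["ilo", "output"])
    for host in ILOS:
        result = subprocess.run(
            ["plink", "-batch", "-ssh", "-l", "Administrator", "-pw", "secret",
             host, "show /system1/firmware1"],
            capture_output=True, text=True, timeout=60,
        )
        writer.writerow([host, result.stdout.strip()])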

  • @TheSeanUhTron
    @TheSeanUhTron 4 years ago +1

    Blades are still useful if you need tons of bare metal servers and don't have much space. That's not a very common need anymore, but some companies still need that.

  • @tylertc1
    @tylertc1 4 years ago +1

    Really enjoyed the more in-depth and behind-the-scenes look at these blade chassis. I'm someone that tinkers and follows enterprise hardware, but at a hobby level, and I love the engineering, etc. Thanks for shedding some more light for those of us that haven't had hands-on experience with these: their purpose, the ins and outs of the backplane feature sets and layouts. I was always curious how the Ethernet ports were actually mapped with these.

  • @GGBeyond
    @GGBeyond 3 years ago +1

    I was contemplating buying one of these at one point because I needed a lot of cheap bare-metal servers to run a MongoDB cluster. It was the relatively high price that ultimately put me off. I ended up acquiring a bunch of 2U 4-node servers that I managed to pick up for $60 per chassis; 12 nodes in total, and 4 of them currently in my server rack. These older servers still have a use if you're not using them for anything compute-heavy.

  • @simongeorge2505
    @simongeorge2505 3 years ago +1

    I run a global virtual private cloud based totally on HPE blades, 3500 of them. They are a mix of C7000/BL460s and newer Synergy SY480 and SY660 units. All run VMware ESX. We have about 70,000 VMs running on them. They all boot off HPE 3PAR storage.

  • @chirkware
    @chirkware 2 years ago

    I had two of these at my prior job--one at the data center (with four blades), and one at the DR site (with two blades). We caught a deal where you got the chassis free with something like three blades. At our data center, one was a vCenter and the other three were VMware hosts in a cluster. Similarly, the DR site had a vCenter and a host (all our VMs could easily run on one). Regarding the switching, there was "backplane switching" IIRC, so you didn't have to cable every individual blade. We had dual SFP+ blade switches forming our core, and we ran a SAN with 10GbE DAC cables to those blade switches (we did NOT use Fibre Channel, it was all iSCSI for us). From there, a pair of 2910al switches with SFP+ modules linking to the blade switches, and from there, out to the LAN. Pretty sweet setup, though the smaller chassis (half this size) would have been sufficient.

  • @jafizzle95
    @jafizzle95 4 years ago +2

    At the last company I worked for, we had just finished acquiring a new Cisco UCS blade chassis for use with virtualization. I think it was 56 cores and 1TB RAM for each blade and 6 blades per chassis. It wasn't my department but I overheard a lot of the discussions leading up to the purchase in 2019.

  • @mojamb0
    @mojamb0 1 year ago

    Wow, this was a blast from the past. I used this server before. I remember when you start the server, it sounds like a jet plane.

  • @scifisurfer8879
    @scifisurfer8879 4 years ago +2

    Hey, Colin, as someone who's trying to get back into the professional tech industry, I loved this video. I was kind of curious what it was you did "IRL" as they say.
    I'm presently working on my Network+ certificate. What would you suggest as a good path to get into whatever it is which exists today?
    And, without revealing personal information like the name of your employer, could you talk a bit about what part of the industry they're in, and what you do there?
    Thanks so much!

    • @ThisDoesNotCompute
      @ThisDoesNotCompute 4 years ago +3

      For getting into the network/sysadmin field, I'm a big fan of working one's way up. Yeah, help desk jobs suck, but I think it makes for better admins as they've experienced firsthand the needs (ire, wrath, etc. lol) of their end users, so they'll keep them in mind when making changes or prioritizing repairs. Network/sysadmins who don't care about their users are why the field has such a bad reputation. Through time and experience you'll work your way into more advanced roles. I have mixed feelings about industry certifications; when you're new they're a reassurance to potential employers that you (are supposed to, at least) know the basics, but in more advanced roles your experience should be more important.

    • @scifisurfer8879
      @scifisurfer8879 4 years ago +1

      @@ThisDoesNotCompute Thank you so much for responding! I have no objection to starting at the bottom and working way up. If nothing else, I'll at least feel like I've earned my place. And yes, I feel the same way as you about end users. If you don't have a feel for what your actions will cause (good or bad) you probably shouldn't be in that field.
      Thanks again.

  • @ncc17701a
    @ncc17701a 2 years ago

    I've built many of these over the years. One project had 28 fully populated enclosures. I spent quite a lot of time building those (even using Puppet to do some of the basic deployment/configuration). HP also sells Virtual Connect modules - a way to consolidate and reduce the number of physical connections. Deployed that on a few other projects. A c7000 chassis can also take full-height blades and storage blades.

  • @stijnl13
    @stijnl13 2 months ago

    I still have 2 chassis running with 2 full-height Itanium blades each, dual Virtual Connect and dual Fibre Channel switches connected to a Dell VNX :) Software migration away from our mainframes is taking longer than expected. Rock solid!

  • @0wnage718
    @0wnage718 4 years ago +1

    I worked somewhere where I ordered and set up one of these. It was a nice kit in its day: you could have full-height blades that took up two of the slots but gave you 4 processors. I do remember we had a chassis fault and had to take the entire thing apart to swap it over, which wasn't funny. It was powered by 3-phase as well, and we had to get special Eaton UPSes for it. The idea behind these at the time was great, as they were so modular and flexible, but the price for the smallest thing was astronomical. I can remember the interconnects on the back costing more than the servers.

  • @BoeroBoy
    @BoeroBoy 2 years ago

    This video is perfect. I just nabbed an old C7000 and a C3000 for a PoC I want to do for customers. The C3000 I'm donating to the local high school to teach kids about tech. I used to work with these when they cost hundreds of thousands. Now they're perfect for teaching kids about the internet. Kudos. Referring the kids to this video for a crash course.

  • @lohphat
    @lohphat 3 years ago

    I've only used HP ProLiant blade servers. Not for running discrete applications, but as an in-house VMware cluster with redundant 10GigE networking to NetApps for storage, for instant snapshots and fast migration of VMs between hardware hosts. Now the blades support higher core count CPUs.
    These are great for dev build systems where people's time is valuable and submissions and smoketest builds must turn around in minutes.

  • @MICHAEL23505
    @MICHAEL23505 3 years ago

    I use these at home. Bought them on eBay just to play around with. About 5 years later I decided to start a wireless ISP, and I knew they would come in handy. I have 4 of the 480c G1s; now that I want to virtualize, I'm looking at getting a few of the full-height servers.

  • @TheMooMasterZee
    @TheMooMasterZee 1 year ago

    Blades were great, even without virtualization, for heavy compute workloads. Several earlier MMORPGs used blades for their backends, as data was all stored in a separate database and the shards just needed to communicate with all the clients and do all the server-side RNG, management, and physics computation. One company has auctioned off individual retired blade units with a fancy acrylic cover at various points to earn money for charity.

  • @ChairmanMeow1
    @ChairmanMeow1 5 months ago

    I interned at Caterpillar years ago and got a look at some absolutely giant HP server racks. Always found it so cool.... literally. Had to A/C the hell out of the area because the computers made so much ambient heat.

  • @Bitterforever
    @Bitterforever 4 years ago

    Takes me back! I pored over the QuickSpecs docs for these back in the day but never saw them in person. Great to see in this video! Thanks!

  • @douglaskinsella5610
    @douglaskinsella5610 4 years ago +2

    Nice job, I didn't know about that little SD card slot.
    I'd love to see a run-through of the iLO (web) interface and a tour of the little LCD status screen...

  • @TheFatDemon
    @TheFatDemon 4 years ago +2

    We still use these for VM clusters, G9s at this point. Our model allows us to select whether to have the PSUs in combined or redundant mode. Combined allowed us to use the capacity of all 6 PSUs, while redundant mode did a 3+3 setup so that we can have the PSUs fed from different sources.

  • @JimLeonard
    @JimLeonard 4 years ago +37

    "In what industry are they decommissioning a blade chassis?"
    (notices they're all G6s)
    "Never mind!"

    • @robertjung8929
      @robertjung8929 4 years ago +2

      G6 is pretty old... they roll Gen10 nowadays.

    • @retr0nus
      @retr0nus 4 years ago +5

      Gen8 should be a good start; any lower than that and say hello to the expensive power bill that just arrived.

    • @connorc2926
      @connorc2926 3 years ago

      @@retr0nus I have a Gen7 and lemme say that the power bill isn't really that high, probably only $10-15 extra, due to 8 drives connected (all SAS drives) and two X5650s equipped with 32GB of RAM (16GB per CPU). I have no complaints with the DL360 G7, but I had to let it go yesterday (5/1/2021) due to it being outdated and the recent purchase of a DL380p G8.

    • @retr0nus
      @retr0nus 3 years ago

      @@connorc2926 Cool, I was over-generalising in my previous comment. Although it does depend from place to place too, unfortunately.

  • @capability-snob
    @capability-snob 4 years ago +1

    Most fun is to load these up with Integrity blades for crazy throughput. If I had room in my office...

  • @tvandbeermakehomergo
    @tvandbeermakehomergo 4 years ago +2

    Our workplace had one of these in storage (brand new) and I actually started to populate it for a test environment (we merged with another IT department so this was surplus). Turned out they only ever ordered one blade for it, so the whole idea was canned and the system was scrapped. Such a shame!

  • @userbosco
    @userbosco 1 year ago

    I used to sell IBM's competitive blade solution back in the early/mid-2000s. It had four BLOWERS on it for cooling, not fans. If I recall, it took a 60A 208V 3-phase circuit, or circuits. Nuts.

  • @gleefulslug
    @gleefulslug 3 years ago

    This is really interesting. My dad works at Hewlett Packard Enterprise and I've seen a ton of these blade servers.

  • @mr.h.4501
    @mr.h.4501 3 years ago +1

    Dell sent us one of these to test in our data center when it came out. I set up the server and almost all the firmware was alpha and pre-release. After 13 months Dell let the company keep it because it cost them too much money to pack and ship it back. If I recall, the server actually had a Cisco-based switch built into the back and ran a weird version of Cisco's IOS.

  • @petripuurunen2491
    @petripuurunen2491 1 year ago

    Hi Colin! It's not even old; we still had lots of the same c9000 enclosures fully housed and in use in 2020. VirtualConnect and server/bay profiles were pretty neat when HP brought them to market.

  • @lurkersmith810
    @lurkersmith810 3 years ago +1

    Contrary to what Sales will tell you (assuming you're always dumping last month's model for the latest and greatest), those old G6 servers are still out there in the thousands, still running, and I bet there's a pretty good chance that this video is streaming through some of them lurking at telcos around the planet.
    Blade-type systems are not dead yet, and HPE and Dell still sell them! Some even run VMware hypervisors, but some are combined to make bigger monsters.
    Two big things are coming out as far as density is concerned: many multiple blades (HPE's current "blade" type product is called Synergy, but they also have smaller, simpler systems called Apollo "sled" servers). They can be virtualized in a way HPE calls Converged Infrastructure, or they can be combined to form High Performance Compute Clusters (HPCC) to act as one large system for weather mapping, statistics, and I suppose crypto mining (though no one tells us if they're crypto mining). In many cases, individual servers can be "drained" or configured out of the larger cluster while it's running, and serviced, and then rejoin when fixed. (A lot of Linux runs in those clusters!)
    Also, there are huge systems of multiple large, interconnected chassis to make one big system with tons of memory interconnected (HPE's Superdome Flex, for example.) On those, you have to shut down one or more racks of several chassis to service anything in the system, because they're all one system and they run full databases direct from terabytes of RAM. (For redundancy, a company would have to have ANOTHER Superdome beast in a cluster!) You don't find VMWare running on the Superdome servers, nor do you find it in High Performance Compute Clusters made up of hundreds of blade or sled type servers. What you do usually find is one or many optimized versions of Linux running very specific tasks. These are not your Exchange Administrator's systems!
    You typically find this stuff in huge data centers with hundreds of racks and thousands of servers (where you need grid coordinates to find what you're supposed to work on). Not so much in a small office's single or handful of racks in a room.

  • @bigbassjonz
    @bigbassjonz 3 years ago +1

    I hate seeing companies retire this hardware. It's still incredibly powerful, even if you don't upgrade / replace the servers. Still makes a great platform for virtualization.

  • @mojamb0
    @mojamb0 3 years ago

    I remember having this at my old job site. The fans are like a jet engine when they start up. I remember being totally confused about how the network cards corresponded with the blades, so I didn't utilize it to its full potential. Anyway, good memories of the 2008-2010 time frame, thanks!

  • @eliotmansfield
    @eliotmansfield 2 years ago

    Lovely bits of kit, but the out-of-the-box cost was very high before you even had 1 server, and then once full you had to spend another huge amount of money to buy and network up the next chassis. Spent loads of time crunching the numbers and it was too expensive for our IaaS virtualisation environment, so we ended up buying good old DL380s as the cost was more incremental. We did sell lots of c7000s to customers who didn't care about the economics.

  • @IkanGelamaKuning
    @IkanGelamaKuning 2 years ago

    Had one at a previous company I worked for. The same model. The lowest-right blade was used as an email server. It worked fine in-house, until it was moved out to external hosting because a utility power failure one weekend had caused an email outage.

  • @PedroOliveira-lz3mk
    @PedroOliveira-lz3mk 1 year ago

    It is great for deploying Oracle RAC clusters. We deploy them with two 10Gb VCs, two 16 or 32 Gb Brocade SAN switches and two InfiniBand switches on the backplane, and distribute the cluster members across 2 or 4 IB-interconnected C700 G3 enclosures. Now that architecture is being replaced by standalone servers using 100Gb Ethernet, and the cabling is a nightmare. Not to mention rack space.

  • @damich9215
    @damich9215 4 years ago

    I worked with the C7000 Gen3. It's very nice, high-performance server hardware and it's the best enterprise blade system.

  • @jeg1972
    @jeg1972 3 years ago

    The HP C7000 was the last hardware I had anything to do with. We had three of them all full of BL460 blades... The biggest problem we had was that one day we needed to power them all down and when we restarted, all of the FlexFabric (HBA) cards died.

  • @chirkware
    @chirkware 2 years ago

    Aww... was hoping you'd show it powering up (somewhere, I have a video of mine powering up I think). After running one of these for several years, I was there when it was powered down. Really was a weird feeling (our bank was bought out and I spent my last few months helping the new bank decommission all my toys 😭).

  • @petethomson
    @petethomson 4 years ago

    I worked for a UK Mobile telco in 2008-2010 and I installed a fair few of these whilst there.

  • @tardvandecluntproductions1278
    @tardvandecluntproductions1278 4 years ago +1

    My company has like 10 Dell PowerEdge M1000e chassis (if our main IT guy doesn't have any more hidden, the sneaky guy) running at 2 locations.
    Sure, we do like all the IT for 80-ish medium to small companies, running over 2K virtual servers.
    For storage they all have their own storage beast containing nothing but SSDs, over those fiber optic cables.
    I don't work much on the blades or the PowerEdges themselves, more on the Windows machines within. Windows is probably the bottleneck here, but 700MB/s is a normal day for me.

    @seanh0123 1 year ago
    @seanh0123 Год назад

    I remember installing about 16 of these one night at an Equinix colo in 2008, I think. HP shipped all the blades, all the CPUs, all the RAM, all the hard drives, everything in separate packaging. I had to open like a hundred CPU boxes and two hundred little RAM boxes, you get the idea. After 8 hours of assembling all the blades and installing them there was a no-kidding 8-foot-tall mountain of trash in the hallway 😂

  • @georgeh6856
    @georgeh6856 2 years ago

    I worked on a similar product (non-Intel) from a different company a few years before this HP came out. The first generation of our product had problems but worked fairly well when we shipped it. The second generation, however, was a disaster. Almost the entire time we could not even install the OSes on it. The project manager, who shortly before this had gotten drunk and fallen down at a company party, and who had pictures of him harassing a co-worker, was terrible. That guy broke all protocols, ignored our problem reports, and shipped the machines broken. Customers were not happy.

  • @HyPex808-2
    @HyPex808-2 4 years ago +4

    Still have these in production, but with Virtual Connect and VMware. Was never a fan of the straight-through switches.

  • @matthew.datcher
    @matthew.datcher 4 years ago

    It's amazing how long the c7000 lasted. I can still remember installing the old P series in the mid-2000s, switching over to the C series around 2010, and finally, the Synergy series starting two years ago. At one point I was managing six c7000s nearly fully populated. Thankfully they had the Flex10 modules otherwise it would have been a cabling nightmare. Those are amazing systems. Too bad most data center managers think you're crazy if you ask for that much power in so little space.

  • @TechIOwn
    @TechIOwn 4 years ago +3

    I remember when blade servers were the biz, it was a big deal when our company introduced these around 2010 or so...

    • @llothar68
      @llothar68 3 years ago +1

      They were popular until around 2016. Then people had had enough of the overpriced vendor lock-in. But only AMD EPYC was able to dig a deep grave for them, and they died off like in a server pandemic.

  • @yumri4
    @yumri4 4 years ago +1

    Blade systems are still good IF higher management doesn't want to do analytical modeling in the cloud. It is good to have each process on separate physical hardware, then have a rackmount machine for the heavy processing. Virtual machines are good for things like the consumer-facing side and the employees one or two steps above consumer-facing, and then for the CEO, CFO and others at their level of management.
    The thing is, you can put your entire customer-support tech team onto VMs just as you can the CEO, but your programmers, developer team(s), the accountant(s) that do the number crunching for end-of-month, end-of-quarter and end-of-year stuff, and the employees that run the physical transactions with your customers in physical stores all need physical hardware. Sure, you can put them all on a virtual machine network, but watch your time to completion go up by 2x to 10x.
    The thing I wish upper management got is that virtual machines can't do everything. You do need physical machines for some stuff, and cloud computing is not always the solution to not wanting to buy physical hardware.

  • @CoachOta
    @CoachOta 4 years ago +2

    I'll add my HP C7000 story: at a past employer, we had a project using the C7000 as the main compute for a private cloud project. This was around 6-7 years ago and our C7000s relied on a big NetApp for their main storage. We were using OpenStack to manage the cloud computing platform but found it was still not mature, ran into lower-level driver issues and were generally understaffed for most of the project's life, so it wasn't the best experience overall. Still, the HP hardware ran well and I can't complain about it. In retrospect, it probably would've been better to jump on AWS or even Google Cloud so we didn't have such a big initial capital spend.

  • @TheServerGeek
    @TheServerGeek 3 years ago

    Great video! I had an opportunity 5 years ago to purchase a complete blade system, set up originally as a primary and backup in two locations, with 4 C7000 chassis and 32TB of SAN storage and controllers, for about US$1600. In hindsight I'm glad I did. It's a lot more kit than I needed, but the engineering in these units was awesome, as shown in this video.... Some similar technology was the Dell M1000s, but I think the Dells were a little better on power consumption(?).

    • @mndlessdrwer
      @mndlessdrwer 2 years ago

      The HPs are better to deal with long-term. Though now you'll be fighting with browsers and Java versions to access the iLO, but you'd have to deal with that for any of them but the Cisco UCS chassis, which actually did get HTML5 support before they ditched support for the original chassis for the new model. Despite possibly needing a dedicated host to manage the blasted thing, I'd still take an HP blade chassis over a Dell or Cisco one. They're more compute-dense and they aren't as much of a nightmare to configure profiles on.

    • @TheServerGeek
      @TheServerGeek 2 years ago

      @@mndlessdrwer Yes, true. I have to keep a copy of the original Windows 7 around just to have the correct browser version. Not super secure, but it works.
      As you mentioned, yes, they do seem to have a greater compute density. It seems IBM and Dell struggled to get quad CPUs even into a larger form-factor blade. At least from what I have seen in print.

    • @mndlessdrwer
      @mndlessdrwer 2 years ago

      @@TheServerGeek Dell has made significant strides in this area, but they seem to be primarily intended for their quad-blade compute nodes. Honestly, given my frustration with backplane issues and switch or passthrough blades on Dell blade servers, this would actually be my preferred approach from them. You get two expansion slots per blade, actually inside the blade itself, and you get some management ports for the chassis and dedicated storage per blade in a 2U form factor. It's pretty slick and works a lot more reliably than the FX2 chassis that it's intended to replace. While I do love the concept of blade servers and their chassis, the more complicated a configuration gets, the more I prefer to just deal with discrete PCIe cards and dedicated switches instead of relying on mid-plane switching and rear I/O cards in blade servers. Plus, it makes it a lot more straight-forward to implement NVMe-FC when you have dedicated and supported PCIe FC cards instead of virtualized ports being fed into a switch in the chassis.
      It doesn't mean that you can't make it work. I helped set up a Dell MX7000 that supported NVMe-FC, it's just more annoying.
      For more info about Dell's high-density compute nodes, check out this model:
      www.dell.com/en-us/work/shop/productdetailstxn/poweredge-c6525
      Edit: my go-to solution for dealing with relics of computing past which maintain usability long past when their manufacturer expected them to be replaced, and thus lack adequate interface support, is to either have a dedicated host or a VM positioned squarely behind a good firewall with a VPN for access, and then have it run whatever browser, Java, or OS version is necessary to allow you to access your management interfaces. Then you can access this host through a remote desktop session to manage stuff with minimized risk to the rest of your infrastructure. It's still not as minimal risk compared to replacing the offending hardware with something using a current and more secure interface, like HTML5, but it's better than needing to expose a PC running an unsupported OS to the internet at large without an enterprise-grade firewall to act as an intermediary. Other pro tip: you can create an account with Oracle for free to download legacy versions of Java. I've found that version 7u45 is the most broadly compatible with older versions of iDRAC and CIMC, but you may need to downgrade all the way to Java 6 for some of the earliest versions of iDRAC on some Dell R710. Additionally, most legacy versions of CIMC require Flash, so you may need to use a portable version of Chrome or Firefox that was installed and then saved before Adobe cut the download access for Flash. Unfortunately, the vast majority of portable browser solutions do not come pre-baked with Flash, but instead run the Flash installer and try to connect to Adobe's servers to pull the install media for Flash, which will fail nowadays. I wish you the best of luck with ongoing support of legacy devices!
      Edit #2: Other, other pro tip: get your hands on one of these:
      www.amazon.com/256-bit-Secure-encrypted-Drive-512GB/dp/B07Y4FR9H7
      Doesn't necessarily matter what size. Whichever you can reasonably justify to your manager or finance dept. or whatever. They're stupid crazy useful. You can load them up with bootable ISOs, even ISOs that aren't traditionally bootable from USB, like Cisco's HUU files. You can even create Windows-to-go images on vHD and vHDx partitions on the drive and select them as the boot media in the same way you can select any other ISO. Did I mention that you can select which media it presents as the boot media? Because you can. You plug it into USB (2.0 or newer, preferably, due to power limits of USB 1.x) use the D-pad and select buttons to navigate the folders and select the image you want to boot from, then restart the machine and select the iODD vCD or vHDD or USB, depending on which emulation method is necessary for your image. You can even write-protect the iODD Mini to prevent the imaging process from detecting it as storage and trying to write to it (VPLEX storage arrays will attempt this for connected media when they reboot after reimaging. It's very irritating.)

    • @TheServerGeek
      @TheServerGeek 2 years ago +1

      @@mndlessdrwer Lots of great info there. Thank you. 😃

  • @TJC450
    @TJC450 4 years ago +1

    CVS stores still use IBM blade servers. However, we've been switching to HP Gen 10 ProLiants. Both seem to be very reliable.

  • @gcflowers86
    @gcflowers86 3 years ago

    If my memory does not fail me, we installed those for some clients back in 2010-15 and installed VMware ESX 3.5-4.5. Those were the good old days.

  • @hendrikheinle5530
    @hendrikheinle5530 3 years ago

    You don't need to put a bezel in an empty slot; the blade enclosure has a mechanism at the back that blocks the airflow when there is no blade in the slot. Modern hypervisors support Ceph, which works over the network, so in my opinion Fibre Channel is obsolete. The main problem with the C7000 is really the power consumption. At idle, we had 6400 watts of power consumption across 8 blade enclosures, just from the fans. After we switched to rack servers, we suddenly saved a third of the electricity costs with the same hardware.

  • @konrad2801
    @konrad2801 3 years ago

    I used to work with those; they were really good for saving datacenter space. I used to have big virtualization farms running on them back then...

  • @stevenkaeser8583
    @stevenkaeser8583 4 years ago

    I worked with that hardware 15 years ago. It’s amazing how the technology has evolved.

  • @digital0ak
    @digital0ak 3 years ago

    Good stuff! Had some Dell blade servers at my old job. They were ok at first, but not as needed once virtualization took off. Still good hardware, just limited.