Regarding 17:52, you can shut down a 3PAR controller via the CLI using the "shutdownnode halt" command. Or, if you want to shut down the whole 3PAR, you can use the "shutdownsys halt" command. You can also shut it down via the service processor menus. You are also correct that you can cut off power to the PSUs to initiate a graceful shutdown.
Why did they return it? They probably learned what matters. ;)
1.) If you need a lot of cheap storage (i.e. backup): Build or buy a box with a lot of cheap drives.
2.) If you need low latency and many I/O ops per sec: Directly attached storage tops everything that adds additional complexity between the application and the storage.
3.) If you need redundancy: Build a redundant application where you reduce the impact of a failing node. Let the application use a redundant Database that runs on something I described in point 2 that is backed up to something described in point 1.
With the saved $150'000 you can come really far in terms of hardware. But what I said depends on another factor: you need qualified system engineers with enough time (not bound to other projects) to build their own specialized system. If you are a small company and don't have these resources, you may be forced to buy such a solution. And whenever a disk fails, you are forced to go back to the original vendor. And if you want new features that become available with off-the-shelf hardware, you need to pay extra or buy a whole new system. And while 1TB SSDs become cheaper and faster, you still need to buy the expensive ones from the vendor of that system.
Anyhow, this is only my opinion and as always, what someone really needs, depends on what he tries to achieve (or archive, in case of storage systems). ;)
While DAS has lower latency it is also local... Storage on 1 server cannot be accessed by another (unless you do some iSCSI or whatever but that would be slower than the 3Par system...)
So you can spend $50K on an all-SSD DAS for server 1, another for each of the next two servers, and then you'll need to buy some high speed links and a very costly switch to sync data across servers...
This is not a system you buy if you have a single server... At that point scalability with hobby projects falls short and you need a SAN, especially when you need shared storage for virtualization, which will not work with DAS. So in the end this solution will be cheaper and easier than building your own... Add to that the fact you always buy support with these systems, so that when something breaks (disk, shelf, controller, cabling, software, firmware, whatever) you ring up HPE and tell them to deal with it. Or you set up the agent software which will do that for you. Especially larger companies will use such solutions. Facebook for instance uses their own server format, but still has an HPE storage system much like the 3Par for storage as it's cheaper, easier and scales much better.
Yes - in an ideal world. But actually, if you have multiple applications that need to be re-engineered in this manner, and aren't a devops house that can "agile harder" to get these projects done quickly, it's actually cheaper to spend some CapEx (complete with tax relief) on a lump of hardware and depreciate it over 5 years - the run costs are inevitably far cheaper than the team of ultra-amazing engineers that you need to run multiple highly available applications all built on different codebases, languages, app-stacks and architectures. Especially as they get bored easily and do other work.....or leave for somewhere more interesting. Infrastructure investment is rarely the right answer - but it's often a quick and cheap answer.
Yes, that SSD is a Hitachi one. More specifically it is an Ultrastar SSD1000MR. Many of the earlier Hitachi enterprise SSDs were made by Intel.
douro20 instead of additional space for a larger SKU, they might have pulled packages out until they met power/heat limits (perhaps their design was overly optimistic, or the chips performed worse than expected)
The chip difference is due to there being two drive variants: a read-intensive SSD (normally 9-12% over-provisioning; write IOPS are capped at roughly 30% to maintain low QoS access times under writes)
and a mixed read/write SSD that still maintains low QoS latency under high read and write loads (normally around 25% OP, so roughly 15% more flash on board while the drive's usable size stays the same). The mixed-use drives normally cost more as they tend to use larger NAND chips (512 vs 256, for example), so you'll actually find the read-intensive drive populating all the pads and the mixed read/write drive using fewer pads, because each NAND chip is twice as large on mixed read/write SSDs.
@@leexgx Also the 920GB is probably because of the 520byte format HP used for some odd reason ;)
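A rough sketch (in Python, purely illustrative) of how over-provisioning and sector format eat into raw flash, using the approximate OP percentages from the comment above; none of these numbers come from an actual HP or Hitachi datasheet:

```python
# Rough sketch of how over-provisioning and a 520-byte sector format eat into
# raw flash capacity. All figures are illustrative, taken loosely from the
# comment above (9-12% OP read-intensive, ~25% OP mixed-use).

def usable_capacity_gb(raw_flash_gb, over_provision, sector_bytes=512):
    """Usable capacity after reserving over-provisioning space, scaled for a
    520-byte format that stores 8 extra bytes of protection info per sector."""
    after_op = raw_flash_gb * (1 - over_provision)
    return after_op * 512 / sector_bytes

raw = 1024  # GB of raw NAND, hypothetical

print(f"read-intensive, 10% OP, 512B sectors: {usable_capacity_gb(raw, 0.10):.0f} GB")
print(f"mixed-use,      25% OP, 512B sectors: {usable_capacity_gb(raw, 0.25):.0f} GB")
print(f"mixed-use,      25% OP, 520B sectors: {usable_capacity_gb(raw, 0.25, 520):.0f} GB")
```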
Looks like a Xyratex chassis, as used in EqualLogic, NetApp, StorSimple etc.
You have good eyes! The chassis is indeed made by Xyratex. I found this information in the startup log.
A company I worked for at the time had one in their server room, along with some EMC stuff. I have seen it only once in my lifetime.
the pattern of the thermal pads on the SSD when you first opened it lines up with the back side of the first board, not the front side that was in contact with the heatsinks... that makes no sense???
They applied thermal pads to all chips and also to the positions where the chips were not installed. You can see that the PCB is prepared for more chips.
That's a nice little screw driver...
I can remember when all the good electronics were made in Japan...
Yeah that was about in 1985... :-)
And I remember how people laughed in the 1970's about Japanese products and Japanese cars. The same situation as today with China. Maybe we will drive Chinese cars soon and they will probably not be so bad. And all electronics comes from China: iPhone, Samsung, HP servers, Dell....
Where did you buy this little screwdriver? Nice
.... I found it 😎
Still true, Chinese things are most often still knock-offs.
Most of our SAN infrastructure is 3PAR, or XP9500 which will be migrated to newer 3PAR. I hear much of the higher tiers of storage will be SSD now, instead of FC-SCSI.. I'll be curious to see how much the performance goes up.
For how much would you sell the HDDs? I would need some.
I read about the battery-backed PSU a couple/few years back. Nice to see one and how it's set up. The latest ones can fit into the PSU slots of Proliant servers, with a cable-add option to a dedicated rack-mount UPS. Except it's only sparsely mentioned in the datasheets, and it was hard to find out what I did. (This was a few months back)
Supermicro has some battery-backed hotswap PSUs for their servers as well, with integrated Li-Ion batteries (around 60 Wh capacity I believe), and considering you can have 2 of them in a machine and they can last 5 minutes or so at full load each, a lightly loaded server with 2 of them actually has quite decent endurance on battery power.
The Supermicro ones just slot in like a normal PSU and are plug and play, fully self-contained units, meaning all they need is the AC cable on the back and a chassis with the correct PSU slot. Battery status is reported via the PMBus that PSUs use to report status. Quite nifty actually to have a 1kW PSU with a built-in UPS.
Nice to hear it's getting more wide-spread adoption. Do the SM models also have an option to connect into a larger UPS other than through the power cable?
Nah, other than your run-of-the-mill UPS options with AC PSUs, 48 VDC PSUs and the internal battery in the PSUs, those are your options.
The 48 VDC PSUs, used with some creativity, might give you some viable options where AC UPSes won't work and the internal battery in AC PSUs isn't sufficient.
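For a rough feel of the endurance described above, a back-of-envelope runtime estimate using the ~60 Wh per PSU figure from that comment; the load values and efficiency factor are assumptions:

```python
# Back-of-envelope runtime for battery-backed PSUs, using the rough ~60 Wh
# per PSU figure mentioned above. Loads and efficiency are made up for
# illustration; real endurance depends on conversion losses, battery age, etc.

def runtime_minutes(battery_wh, load_watts, efficiency=0.9):
    """Minutes of runtime for a given battery capacity and load,
    with a flat efficiency factor for conversion losses."""
    return battery_wh * efficiency / load_watts * 60

two_packs = 2 * 60  # Wh, two battery PSUs fitted

print(f"1000 W full load : {runtime_minutes(two_packs, 1000):.1f} min")
print(f" 200 W light load: {runtime_minutes(two_packs, 200):.1f} min")
```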
Oh... and what did you do with those HDDs? That kind of SAS HDD is VERY expensive in our country...
I wonder if they made the management app look similar to VMware's on purpose, to make IT guys feel more comfortable
I thought that it looks very much like EMC Navisphere.
But this is perhaps something like industry standard today. Or the software comes from the same programmers in India :-)
the new management app is called SSMC (StoreServ Management Console), based on HTML5, and looks much prettier than the old one in this video.
It's not a standard disk. It's an enterprise grade disk that is designed to run 24/7/365 non stop: high hour usage and constant data reads/writes. Totally different than a consumer grade disk. Also you are paying for the maintenance support on the drives as well. They have a phone-home feature: if a disk is failing, it will automatically phone HP with all the info and HP will send a new one to arrive within 4 hours of the drive reporting the failure.
Must have a really big "video" collection. lol
Dayum that hardware is just amazing awesome video
Thank you. I'm preparing a new packet for you..... some drives that will hopefully survive transport this time.
Play with Junk thank man love you
hi,
we have an HP 3PAR 7000 with 32 x 4TB SATA drives. It is making too much noise, please provide a solution.
First tell me what the problem is.... is it too noisy? Only 4 TB? Did you mean PB...?
Pretty sure they use VxWorks not Linux for the controllers?
yes but the base of it is some sort of UNIX/Linux...heavily modified of course
13:14 64 GB? My Debian laptop has 32 GB and I have a ton of apps on it.
I wonder if these 64 GB are actually required for the controller; maybe 4 GB would have been enough, but there are no 4 GB SSDs with comparable read/write performance!
Aaah! And of course, if a power failure happens, it can dump the memory to the SSD, so maybe that is another reason!
This product is manufactured under patent(s) of Hitachi Global Storage Technologies, registered or licensed in the United States or other countries.
If you switch off the PSU to pull it out, the controller uses the battery power to save data to a disk and shuts down. But if the reason for pulling the PSU is that the battery is totally dead, then what? Also it seems a bit of a bad idea to drain the battery before removal, in case of the one-in-a-million chance that the power fails soon after refitting the PSU.
Is there no option for the controllers to have A+B PSUs, so there is an option to hotswap PSUs without shutting down controllers ?
There are two power supplies with two batteries. If one is faulty the other will still work. The batteries are checked for health regularly and if one is not good, you get a message to replace it.
I don't know how this system handles the batteries but Proliant servers check the battery at every startup and regularly when running. If something is wrong with the battery the cache memory will be disabled until the battery is fixed. So no danger at all... System runs a bit slower but it runs.
Where the heck did you get that screwdriver?....
it's called ES120 and can be bought at your favourite chinese shop
@@PlaywithJunk Where all the best toys come from. Thank you!
i wonder whats on that internal SSH
SSH?
@@PlaywithJunk wups i ment SSD :D
@@dbmaster46446 I thought so... 🙂 On the SSD is the operating system of the controller. It's a Unix based OS. It is the boot drive of the controller.
@@PlaywithJunk but i was wondering if it is modifiable or bootable on a different system
I like the yellow colors
What operating system was it using?
It uses the 3PAR OS..... based on some sort of linux.
@@PlaywithJunk thanks
True, a SAS (Serial Attached SCSI) drive is not IDE or SATA, although SATA drives can work in SAS servers. Standard SAS was the fastest in its time: 10,000 rpm, excellent in RAID configurations and with a low failure rate, while IDE/SATA offered only 7,200 rpm and higher failure rates.
There are also 15000rpm SAS drives.
@@PlaywithJunk I know, but the first generation of server SAS drives was 10,000 rpm while other standard IDE/SATA drives were 7,200 rpm.
"We got it back from a customer that didn't need it anymore, I don't know why..."
AWS is the reason.
Amazon Web Services? I don't think so...
Not on this scale of operation, AWS becomes crazy expensive quite fast.
@@MikaelLevoniemi true, but if availability is a primary concern, and your workload can be scaled horizontally, it can work out cheaper. If your workload scales, and is very very bursty, AWS is definitely cheaper than having expensive tin sitting idle in a datacenter for 50% of its lifecycle. Equally, of course, the whole system may simply have been consolidated onto a bigger SAN with a shelf of modern 15TB SSDs that outperform this, as well as having greater storage density - the power savings on those over spinning disks can often self-finance after 24 months, if you're paying the electricity bill :)
@@0SteveBristow AWS is just fine if you spend less than a million a year on your infra. After that it becomes cheaper to employ admins and techs to install and spread around your own private cloud infra. Not as many bells and whistles, but more freedom to build your own tools.
No point in using AWS for thousands of servers with petabytes of data.
@@MikaelLevoniemi I don't disagree that you have to study your use case carefully: but having come from a business with an 18m annual budget, they were still able to close two entire datacenters by moving infrequent batch processing to the cloud. Azure in this instance, but the maths is much the same: the Azure 'low priority' tier makes a huge amount of sense for the monthly consolidation job, which runs for 3 days a month with several hundred multi-core servers. Likewise, when they had their annual customer rush, they scaled their app and web tier heavily into AWS. This lasted maybe 14 days. Scaling into public cloud saved them the run costs of some 20 fully populated racks (conclude what you will about VM density).
I agree, public cloud is often a very bad answer, and needs to be very very carefully considered. However, simplifying this analysis down to 'over a million a year' or 'it's just really expensive' seems a little disingenuous. Public cloud IS closing datacenters, and in many cases (again: I agree some are making a huge mistake!) quite rightly. There are also use cases for DR that entirely depend on which country you're in as to their financial viability: companies with rack after rack sitting idle 'just in case' are able to avoid needlessly expensive tech refreshes by planning some cloud capacity for DR.
Edit to add- thank you for an intelligent debate on this topic : it comes up in our industry often and few seem to have given it much intelligent analysis.
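A rough sketch of the "power savings self-finance after 24 months" claim from earlier in this thread; every figure below is a placeholder to show the shape of the calculation, not a real quote or tariff:

```python
# Very rough sketch of the "SSD shelf pays for itself in power savings"
# argument a few comments up. Every number is hypothetical -- plug in your
# own wattage, electricity price and hardware quotes.

def payback_months(extra_hardware_cost, watts_saved, price_per_kwh=0.25, pue=1.6):
    """Months until the electricity saved (including cooling overhead,
    approximated by a PUE factor) covers the extra hardware cost."""
    kwh_saved_per_month = watts_saved * pue * 24 * 30 / 1000
    return extra_hardware_cost / (kwh_saved_per_month * price_per_kwh)

# e.g. replacing a few shelves of 15k spinning disks with one SSD shelf
print(f"{payback_months(extra_hardware_cost=20000, watts_saved=3000):.0f} months")
```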
Sweet screwdriver. What wattage does that mod put out?
It's a commercial product called ES120
OK, get this: that SSD is made by Intel for Hitachi, as indicated by the P/N, and resold by HP with their label on it. It went for a ride.
...then assembled in Poland, shipped to Germany, then to Switzerland.
What screwdriver is that? It's awesome!
It is named ES120 and comes from china. Lots of videos about it....
ebay.to/2EPl1HM
Wouldn't recommend it, not powerful enough to disintegrate screws.
Marco Reps that's a good point no stripping of screwheads
Marco Reps
I bet your homemade one is much better at taking stuff apart, plus it is cooler.
:P
I thought it was made by Intel even before you opened it or looked at the label. The aluminum case seemed familiar.
The same enclosure (without the yellow) is used by Dell and probably many others.
great channel. been watching for years, always fun. TY
(The yellow) Looks like those Google/Dell servers. Is that on purpose?
Nope, it's just meant to make 3Par stand out visually, which it does. Also I highly doubt most Google on-premise appliances would require such storage anyway....
Looks a lot like the NETAPP systems I've worked with in the past.
A Dacia (or Yugo) and a Ferrari also use almost similar-looking parts... ;-)
Cos it's using the same disk Chassis, but with some different modules \ software \ firmware etc etc :)
Many of these systems from that era were by Xyratex, who nailed a modular system, with good documented standards, at a great price... for several models in a row. They were somewhat displaced by the "SSD revolution" and its need for NVMe backplanes etc.
HP firmware on the disks. Some of their systems will refuse to run a disk without it!
Never had problems with foreign disks on Proliant servers. You only need a genuine HP disk frame/carrier then it works.
Play with Junk really? I thought they check firmware
@@PlaywithJunk The newer Gen9 and Gen10's seem to have serious issues where they will spin up the fans to 100% when a non-HPE disk is connected... Older servers do the same but to 66% when non-HPE PCIe cards are inserted (DL380 G7 and DL380p Gen8 at least do this) and the MSA60 just outright refused older HPE branded disks such as their Maxtor disks. They were HPE branded but refuse to function within an MSA60 attached to P812 or P800, poke those disks in the server itself but on the exact same controller and they work just fine.
It seems HPE just does it randomly...
@@someguy4915 Ditto this - Whilst older Proliants aren't so picky about disks, you can be damn sure that these storage systems will log an error with HP the minute an unknown is connected. Heck, NetApps won't accept a disk until it's been able to upgrade the disk firmware to the latest level - a process which happens automatically when the disk is inserted. The customer is expected to maintain an up to date "qualification list" of disks that are accepted by the system. On occasion, disks with unacceptably high failure rates are "de-qualified" - meaning that the system will report them as "potential risks" and the vendor will send replacements...purely based on their experience with various models.
@@PlaywithJunk Old post I know, but 3PARs ONLY accept known disks, and all disks have custom firmware; not even regular HP/HPE disks work in 3PAR storage systems - I speak from experience :)
What an awesome job.
Using an electric screwdriver - good man :)
Nice back to the future reference
I need one of these for my Plex server.
3:00 And people, just so you know, SAS drives are NOT the same as SATA (though I think SAS is backwards compatible with SATA, but not the other way around), especially the enterprise ones, which are pretty much the only thing they are meant for. These drives are faster, have almost double the MTTF and can run 24/7 compared to ordinary consumer drives, no matter what speed those might have. You simply get what you pay for.
SAS drives are normally built to work 24x7 while SATA drives are designed for private PC use. As a rule of thumb you can say that a more expensive SAS controller can handle SAS and SATA drives while a cheap SATA controller can't handle SAS drives.
Some drives are available with SAS or SATA interfaces, so the difference is only the interface chip. I'm not sure where the price difference comes from in that case... ;-)
@@PlaywithJunk I have once seen SAS drives that were not enterprise ones, just ordinary hard drives. :D But for real SAS enterprise drives you really get what you pay for, and as you say they run 24/7 all day long.
I have a little old server myself that I got from my boss a year ago, a DELL PowerEdge T430, dual CPU, 64GB memory, but only 5 x 300GB SAS hard drives in it, so it can't really be used for much. It's only for fun though; it's nowhere near being used for what it was built for.
Enterprise hard drives cost so much, even used, that it's not really an option to upgrade the storage in it. Sure, I can put in a 4TB SATA drive, but I will not gain anything at all in speed. The SAS drives will be much faster and will just be waiting for the SATA drive to finish anyway.
6Gs for an SSD that was formatted differently??
6Gs for an SSD that will work with the system and if it ever fails HP comes and replaces it within 4 hours.
Some Guy yeah that's what you pay for. The service speed
If you don't wanna play, go build it yourself on Newegg... you're on your own when it dies though. Sort of a bummer when you have a few thousand of them.
You need redundancy, spare parts, and also warranty is a thing.
@@someguy4915 Yes, but then you pay the k$ every month for the support contract... (And you need it. Our 8200 seems to break a disk every few weeks.)
Why do they NOT make it with M.2 so it uses less energy, and add RAM to it as cache, so it can be super small and super fast?
They probably do and probably charge more for it. SuperMicro has some new all NVMe M.2 slot 1U servers that are really nice. Probably way cheaper. But the price of this isn't just hardware, but software & support.
M.2 is just a connector and so will not use any energy, much like the SAS connection itself does not use power... You can do SATA (6Gb/s) over M.2, which would be slower than these disks' SAS (12Gb/s dual port), or NVMe, which is limited in how many disks can be used and has only 1 connection per SSD, so no redundancy there...
M.2 NVMe SSDs will actually use much more power than SATA/SAS; it's a big problem with such drives that they overheat and start throttling within minutes... This system is 24/7 and is capable of 100% load 24/7 (not recommended but possible), so NVMe would use an enormous amount of additional power, cooling and cost for no benefit...
Anyway, if you have two shelves with 24 SSDs each you get 160+ Gb/s anyway, with so much IOPS that the individual connection doesn't matter...
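To illustrate why the per-drive connector rarely matters here, a quick aggregate-bandwidth comparison; the uplink widths and counts are assumptions for illustration, not the actual 3PAR 7200 cabling:

```python
# Quick look at why the per-drive connector is rarely the bottleneck in a
# shelf like this. Link widths and counts are assumptions for illustration.

drives_per_shelf = 24
shelves = 2
sas_per_drive = 2 * 12          # Gb/s, dual-ported 12 Gb/s SAS
uplink_per_shelf = 2 * 4 * 12   # Gb/s, e.g. two 4-lane 12 Gb/s wide ports

drive_total = drives_per_shelf * shelves * sas_per_drive
uplink_total = shelves * uplink_per_shelf

print(f"sum of drive interfaces: {drive_total} Gb/s")
print(f"sum of shelf uplinks   : {uplink_total} Gb/s")
# The uplinks (and the controllers behind them) saturate long before the
# drives do, so per-drive NVMe links wouldn't change much in this layout.
```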
All those SAS drives in RAID 0 = insane speed!!
As far as I remember I was able to access a random 8 KB block within an Oracle database with an access time of 1 ms. That was measured from the perspective of a programmer, not a sysadmin.
@@jaroslaww7764 I have seen way faster speeds than that.
@@dtiydr Good for you! As I said I was only a programmer who had to deal with some Oracle database to achieve
@@jaroslaww7764 Ah yes, and a database is normally not extremely fast (there are exceptions of course) and slower than a RAID 0 of several SAS disks. But 1 ms is not bad for that, I might say. Cool.
@@dtiydr at the time I did it it was still the time of monolith applications in the company I worked for. You know, monolith web app written in Java plus big Oracle database plus such a storage device (now I think it was something from EMC). I had to optimize the time it took to present to the user the most used screen of app. Knowing that it takes 1ms for the retrieval of 1 block I knew I'm only able to access 1k blocks. Now, what I do are microservices and I would create separate database kept mostly in memory to serve same purpose and not to have to care about the access time.
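The reasoning above as a tiny worked example; the 1-second response budget is an assumed figure for illustration:

```python
# If one random 8 KB block costs ~1 ms, a page-render budget caps how many
# blocks you can touch serially. The 1 s budget is an assumption.

block_size_kb = 8
latency_ms = 1.0
budget_ms = 1000.0   # hypothetical response-time budget

max_blocks = int(budget_ms / latency_ms)
effective_mb_per_s = block_size_kb * (1000 / latency_ms) / 1024

print(f"blocks you can read serially: {max_blocks}")                      # ~1000
print(f"effective single-threaded throughput: {effective_mb_per_s:.1f} MB/s")
```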
why not speed benchmarks :D
Dang, I would love to have one of those SSDs.
What would you do with it?
I cannot imagine a scenario where I would need one so I do not want one.
Imho, with SAN array when you pay $6k for 1TB SSD it is like you pay $200 for hardware (similar OEM SAS SSD) and the rest for SAN software (allowance to run SAN software on SSD of this particular size). Just like for Oracle DB Enterprise you pay $57k for one CPU license.
I would say $500 for the SSD and $5500 for the firmware branding and the sticker with HP logo.
Does it mean software itself costs flat, whether you run 6TB or 1000TB config? With SDS price usually increases with space usable (Ontap Select, ScaleIO, HPE VSA, Nexenta).
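As an illustration of the two pricing models being compared in that question, here is a toy comparison; all prices are invented placeholders, not vendor list prices:

```python
# Toy comparison of a flat array-software price vs. capacity-based (per-TB)
# SDS licensing. All prices are invented placeholders.

def flat_license(capacity_tb, price=100_000):
    return price

def per_tb_license(capacity_tb, price_per_tb=800):
    return capacity_tb * price_per_tb

for tb in (6, 100, 1000):
    print(f"{tb:>5} TB  flat: ${flat_license(tb):>9,}   per-TB: ${per_tb_license(tb):>9,}")
```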
14:35 I want to see a graphics card in there and someone playing games on a storage controller. :D
If you find someone who re-writes the operating system... why not. There is even a PCI-e slot.
Why's the SSD so expensive?
because of the HP sticker?
Play with Junk
Oh, you mean Highly Problematic Enterprise sticker??
Maybe
@@beedslolkuntus2070 :-) that was a good one!
Play with Junk
(: I promise! Stupid Printers and servers
There's just nobody that makes any 3.5 or 5.25 inch size SSDs!
arstechnica.com/gadgets/2016/08/seagate-unveils-60tb-ssd-the-worlds-largest-hard-drive/
@@dmitriyvassilyev5849 check
@@dmitriyvassilyev5849 OMG they are large
Marty did not say China... he said Japan...
I know.... I did this on purpose :-)
Those disks are expensive because unlike in your home laptop they are tested for continuous workload
But not that expensive. You can get exactly the same drive from the original manufacturer for a fraction of the price. If it's not 3PAR branded... :-)
damn HP stuff interesting
Only fortune 500 companies and government departments can afford this!
it's $150'000, not $150'000'000
Not really. Company I worked for had one, and the company was quite big, but not comparable to ones of F500.
Unhappy with some of the accuracy here - yes, the system will attempt to cleanly shut down a controller in a power loss, but there most definitely IS a shutdown procedure.
Enterprise SAS disks go through extensive extra testing prior to qualification - not only do SAS disk mechanisms get tested more aggressively by manufacturers (who will specifically rate them at far narrower tolerances than SATA disks, as well as usually guaranteeing them for longer and supporting harder use) but the controllers have much smarter firmware, generally understanding and advertising failure mode behaviours much earlier, and measuring a lot more disk data than "SMART" gives you on consumer disks. Finally, the disks themselves go through additional testing by the system vendor, whose controller firmware will react appropriately to specific disk behaviour. Furthermore, a failed SAS disk will be replaced by a storage system vendor at their cost in 4hrs - or they pay a contractual breach penalty. These systems host millions (or billions) of dollars of data that can't afford to be unavailable - to the extent that RAID purely exists to allow an online spare to replace a failing disk. You then expect the failure to be replaced inside 4hrs to become the spare. So you're paying for a fair bit more than a bracket and a sticker.
Disks can, and HAVE been, re-read after a format. It's not simple, but if you have ultra-confidential data on there (maybe government, military, or significant financial) people can (and have) rescued scrapped, or even failed, disks and recovered data from them. There is a reason standards like PCI-DSS explicitly require encryption at rest.
SSDs with adequate airflow do not require heatsinks - however they DO benefit from significantly different wear-levelling algorithms when used in storage systems - stuff like storing journal blocks and data blocks in separate disk areas is critical to their availability, as well as performance.
Whilst I appreciate the need to simplify this complex technology, your summary seems overly reductionist. Yes, these systems ARE overpriced - but mainly because the value of them is not in the hardware, but in the service they provide... especially when it's to offer the lowest latency and highest throughput possible with the maximum of availability. Try doing synchronous disk writes across two computers and do some performance testing, and you will pretty quickly see where your money goes.
Glad to see it wasn't just me; there have been a couple of times hardware and prices were mentioned throughout other videos and I thought to myself, no, that is certainly not correct. The suppliers do overcharge for the hardware relating to what you mentioned about replacements etc., but I did think to myself the prices cannot be for the hardware alone; if you were to buy it off the shelf with no service warranty etc. then it would be significantly cheaper. Nonetheless I do enjoy PWJ videos as it is interesting to see some of the stuff, however most of it I have seen before through working in a data center!
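For anyone who wants to try the experiment suggested above, a minimal single-machine sketch that times fsync'd writes (before you even add a second machine and synchronous replication); block size and iteration count are arbitrary choices:

```python
# Time synchronous (fsync'd) writes to local disk to get a feel for where
# the latency goes before replication is even involved.

import os, time, tempfile

def synced_write_latency(path, block=8192, iterations=200):
    buf = os.urandom(block)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        t0 = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, buf)
            os.fsync(fd)          # force each write to stable storage
        elapsed = time.perf_counter() - t0
    finally:
        os.close(fd)
    return elapsed / iterations * 1000  # ms per synced write

with tempfile.TemporaryDirectory() as d:
    ms = synced_write_latency(os.path.join(d, "probe.bin"))
    print(f"{ms:.2f} ms per fsync'd 8 KB write")
```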
That's some hardcore enterprise storage. You can buy a house for the cost of that unit.
Would you believe it's "Mid Tier" - the really expensive stuff is totally nuts!
Good Stuff
My dream is to be in a datacenter just looking at, disassembling and decommissioning these machines on a daily basis...
Well... open a company that specializes in IT hardware recycling and data erasing/destroying. This is a service that is needed all over the world. Every disk that goes to scrap probably has sensitive data on it. Customers don't have the time and equipment to take care of it themselves.
And if you recycle properly, you also get money from scrap metals and cables.
And you're doing something good for the environment...
👍👍👍👍👍
Interesting. Thanks.
20:27 We can't be friends - the router belongs at .1 and not .254... ^^
I use .254 for gateway on most networks. I have used .1 for some. I have seen it go both ways in different environments. It's not a big deal, and I don't get bent out of shape too much about it.
The router belongs wherever my DHCP server shits a lease to :D :D :D
Computer crap is mindbogglingly expensive
Sexy piece of equipment! Also nice screwdriver, it looks like an electronic cigarette mod! Don't vape it 😀👍
ES120 screwdriver.... google that :-)
That screwdriver definitely knows the way.
Most likely not bought for yourself.
You forgot to show us the spy chip China puts in each server.
It's the same chip the USA uses too :-)
These are not junk at all (unlike modern consumer SSDs, which are total sh..t).
Definitely not worth the $6000.
I agree..... but there is not much you can do when you have such a system :-)
@@PlaywithJunk Can they be used with regular drives? Regular enterprise drives can be formatted to different sector sizes like 520 and 528 bytes. I had some 520 ones that I've formatted to regular 512 byte sectors and they worked normally in a normal server.
No. We tried that, but just reformatting with 520 blocks does not work. The 3PAR wants 3par drives. It checks the firmware and if it sees a foreign drive, it refuses to take it in.
@@PlaywithJunk Well that's a shame for such an expensive system... Ah well...