How about a $6000 SSD drive? HP 3PAR 7200 - (PWJ85)

  • Published: 26 Oct 2024

Comments • 175

  • @Primetime94
    @Primetime94 5 лет назад +2

    Regarding 17:52, you can shut down a 3PAR controller via the CLI using the "shutdownnode halt" command. Or, if you want to shut down the whole 3PAR, you can use the "shutdownsys halt" command. You can also shut it down via the service processor menus. You are also correct that you can cut off power to the PSUs to initiate a graceful shutdown.
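
    For reference, here is a minimal sketch of issuing those CLI commands non-interactively over SSH. The commands are the ones named in the comment above; the management address, user and node ID are illustrative assumptions, and the exact syntax should be checked against your 3PAR OS documentation.

      # Sketch only: drive the 3PAR CLI over SSH with the stock ssh client.
      # Host, user and node ID are placeholders.
      import subprocess

      ARRAY = "3paradm@3par7200.example.local"   # hypothetical management address

      def run_cli(command: str) -> str:
          """Run one 3PAR CLI command over SSH and return its output."""
          result = subprocess.run(["ssh", ARRAY, command],
                                  capture_output=True, text=True, check=True)
          return result.stdout

      # Halt a single controller node (node 0 here), as described above:
      print(run_cli("shutdownnode halt 0"))

      # Or halt the whole array:
      # print(run_cli("shutdownsys halt"))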

  • @douro20
    @douro20 6 лет назад +12

    Yes, that SSD is a Hitachi one. More specifically it is an Ultrastar SSD1000MR. Many of the earlier Hitachi enterprise SSDs were made by Intel.

    • @matthewghali2987
      @matthewghali2987 6 лет назад +1

      douro20 Instead of additional space for a larger SKU, they might have pulled packages out until they met power/heat limits (perhaps their design was overly optimistic, or the chips performed worse than expected).

    • @leexgx
      @leexgx 5 лет назад

      The chip difference comes from there being two drive variants: a read-intensive SSD (normally 9-12% over-provisioning; write IOPS is limited to roughly 30% to maintain low QoS access times under writes),
      and a mixed-use high read/write SSD that still maintains low QoS latency under heavy read and write loads (normally around 25% OP, so roughly 15% more flash on board while the usable drive size stays the same). The mixed-use drives normally cost more (££) as they tend to use larger NAND packages (512Gbit vs 256Gbit, for example), so you will actually find the read-intensive drive populates all the pads while the mixed-use drive uses fewer pads, because each NAND chip is twice as large on the mixed read/write SSDs.
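
      To make the over-provisioning arithmetic concrete, here is a tiny worked example; the raw-capacity figures are illustrative assumptions, not the actual chip counts in this drive.

        # Rough sketch: how over-provisioning (OP) relates raw flash to usable capacity.
        # OP is commonly quoted as (raw - usable) / usable.
        def overprovisioning(raw_gb: float, usable_gb: float) -> float:
            return (raw_gb - usable_gb) / usable_gb * 100.0

        # Hypothetical read-intensive drive: 1024 GB of raw flash for 960 GB usable
        print(f"read-intensive: {overprovisioning(1024, 960):.1f}% OP")   # ~6.7%

        # Hypothetical mixed-use drive: same usable size, ~25% OP needs more raw flash
        print(f"mixed-use:      {overprovisioning(1200, 960):.1f}% OP")   # 25.0%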

    • @peterpain6625
      @peterpain6625 5 лет назад +1

      @@leexgx Also, the 920GB is probably because of the 520-byte sector format HP used for some odd reason ;)
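
      A back-of-the-envelope check of that idea (pure arithmetic; the drive's real raw capacity is an assumption here): 520-byte sectors carry 8 extra bytes of protection information per 512-byte data block, so they explain only part of the gap down to 920GB.

        # Sketch: capacity cost of 520-byte sectors versus plain 512-byte sectors.
        RAW_GB = 1000.0                      # assumed nominal raw capacity, illustrative only

        sectors = RAW_GB * 1e9 / 520         # sectors available at 520 bytes each
        user_gb = sectors * 512 / 1e9        # user data those sectors can hold

        print(f"{user_gb:.1f} GB usable at 520 B/sector")   # ~984.6 GB, a ~1.5% loss
        # The rest of the way down to 920 GB would have to come from extra over-provisioning.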

  • @scarakus
    @scarakus 6 лет назад +5

    That's a nice little screwdriver...
    I can remember when all the good electronics were made in Japan...

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад +4

      Yeah, that was around 1985... :-)
      And I remember how people laughed in the 1970s about Japanese products and Japanese cars. It's the same situation as today with China. Maybe we will be driving Chinese cars soon, and they will probably not be so bad. And all electronics already come from China: iPhone, Samsung, HP servers, Dell....

    • @blunator
      @blunator 6 лет назад

      Where did you buy this little screwdriver? Nice

    • @blunator
      @blunator 6 лет назад

      .... I found it 😎

    • @Harry11315
      @Harry11315 5 лет назад

      Still true; Chinese things are still most often knock-offs.

  • @zvpunry1971
    @zvpunry1971 6 лет назад +6

    Why did they return it? They probably learned what matters. ;)
    1.) If you need a lot of cheap storage (e.g. backup): build or buy a box with a lot of cheap drives.
    2.) If you need low latency and many I/O ops per second: directly attached storage beats everything that adds additional complexity between the application and the storage.
    3.) If you need redundancy: build a redundant application where you reduce the impact of a failing node. Let the application use a redundant database that runs on something like point 2 and is backed up to something like point 1.
    With the saved $150'000 you can get really far in terms of hardware. But what I said depends on another factor: you need qualified system engineers with enough time (not bound to other projects) to build your own specialized system. If you are a small company and don't have these resources, you may be forced to buy such a solution. And whenever a disk fails, you are forced to go back to the original vendor. And if you want new features that become available with off-the-shelf hardware, you need to pay extra or buy a whole new system. And while 1TB SSDs become cheaper and faster, you still need to buy the expensive ones from the vendor of that system.
    Anyhow, this is only my opinion and, as always, what someone really needs depends on what he tries to achieve (or archive, in the case of storage systems). ;)

    • @someguy4915
      @someguy4915 6 лет назад +1

      While DAS has lower latency, it is also local... Storage on one server cannot be accessed by another (unless you do some iSCSI or whatever, but that would be slower than the 3PAR system...)
      So you can spend $50K on an all-SSD DAS for server 1, another for the next 2 servers, and then you'll need to buy some high-speed links and a very costly switch to sync data across servers...
      This is not a system you buy if you have a single server... At that point scalability with hobby projects falls short and you need a SAN, especially when you need shared storage for virtualization, which will not work with DAS. So in the end this solution will be cheaper and easier than building your own... Add to that the fact that you always buy support with these systems, so that when something breaks (disk, shelf, controller, cabling, software, firmware, whatever) you ring up HPE and tell them to deal with it. Or you set up the agent software, which will do that for you. Especially larger companies will use such solutions. Facebook, for instance, uses their own server format but still has an HPE storage system much like the 3PAR for storage, as it's cheaper, easier and scales much better.

    • @0SteveBristow
      @0SteveBristow 5 лет назад

      Yes - in an ideal world. But actually, if you have multiple applications that need to be re-engineered in this manner, and aren't a devops house that can "agile harder" to get these projects done quickly, it's actually cheaper to spend some CapEx (complete with tax relief) on a lump of hardware and depreciate it over 5 years - the run costs are inevitably far cheaper than the team of ultra-amazing engineers that you need to run multiple highly available applications all built on different codebases, languages, app-stacks and architectures. Especially as they get bored easily and do other work.....or leave for somewhere more interesting. Infrastructure investment is rarely the right answer - but it's often a quick and cheap answer.

  • @jaroslaww7764
    @jaroslaww7764 3 года назад

    Once upon a time, a company I worked for had one in their server room, along with some EMC stuff. I've only seen one once in my lifetime.

  • @eliotmansfield
    @eliotmansfield 6 лет назад +5

    Looks like a Xyratex chassis, as used in EqualLogic, NetApp, StorSimple, etc.

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад +4

      You have good eyes! The chassis is indeed made by Xyratex. I found this information in the startup log.

  • @AmitChaudhry27
    @AmitChaudhry27 6 лет назад +1

    Dayum, that hardware is just amazing. Awesome video!

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад +1

      Thank you. I'm preparing a new package for you..... some drives that will hopefully survive transport this time.

    • @AmitChaudhry27
      @AmitChaudhry27 6 лет назад

      Play with Junk thank man love you

  • @SaberusTerras
    @SaberusTerras 6 лет назад

    I read about the battery-backed PSU a couple/few years back. Nice to see one and how it's set up. The latest ones can fit into the PSU slots of Proliant servers, with a cable-add option to a dedicated rack-mount UPS. Except it's only sparsely mentioned in the datasheets, and it was hard to find out what I did. (This was a few months back)

    • @KSSilenceAU
      @KSSilenceAU 6 лет назад

      Supermicro has some battery-backed hot-swap PSUs for their servers as well, with integrated Li-ion batteries (around 60Wh capacity I believe), and considering you can have two of them in a machine and they can last 5 minutes or so at full load each, a lightly loaded server with two of them actually has quite decent endurance on battery power.
      The Supermicro ones just slot in like a normal PSU and are plug and play, fully self-contained units, meaning all they need is the AC cable on the back and a chassis with the correct PSU slot. Battery status is reported via the PMBus that PSUs use to report status. Quite nifty, actually, to have a 1kW PSU with a built-in UPS.
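
      As a rough illustration of what "reported via the PMBus" means, here is a minimal sketch that polls a PSU's standard STATUS_WORD register over SMBus. The bus number and device address are assumptions, and on a real server this data is normally read through the BMC rather than directly.

        # Sketch: read the PMBus STATUS_WORD (command code 0x79) from a PSU over SMBus.
        # Requires the smbus2 package and raw access to the I2C bus the PSU sits on.
        from smbus2 import SMBus

        PSU_BUS = 1          # hypothetical I2C bus number
        PSU_ADDR = 0x58      # hypothetical PMBus address of the PSU
        STATUS_WORD = 0x79   # standard PMBus command: summary status bits

        with SMBus(PSU_BUS) as bus:
            status = bus.read_word_data(PSU_ADDR, STATUS_WORD)

        print(f"STATUS_WORD = 0x{status:04x}")
        if status & (1 << 6):    # bit 6: unit is OFF
            print("PSU reports it is off")
        if status & (1 << 3):    # bit 3: input undervoltage fault (e.g. AC lost)
            print("input power fault - likely running from the battery")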

    • @SaberusTerras
      @SaberusTerras 6 лет назад

      Nice to hear it's getting more widespread adoption. Do the SM models also have an option to connect into a larger UPS other than through the power cable?

    • @KSSilenceAU
      @KSSilenceAU 6 лет назад

      Nah, other than your run-of-the-mill UPS options with AC PSUs, 48V DC PSUs and the internal battery in the PSUs, those are your options.
      The 48V DC PSUs, used with some creativity, might give you some viable options where AC UPSes won't work and the internal battery in AC PSUs isn't sufficient.

  • @adamsaint2890
    @adamsaint2890 5 лет назад +10

    Must have a really big "video" collection. lol

  • @MicheIIePucca
    @MicheIIePucca 6 лет назад

    Most of our SAN infrastructure is 3PAR, or XP9500 which will be migrated to newer 3PAR. I hear much of the higher tiers of storage will be SSD now, instead of FC-SCSI.. I'll be curious to see how much the performance goes up.

  • @Sams911
    @Sams911 3 года назад +1

    the pattern of the thermal pads on the SSD when you first opened it lines up with the back side of the first board, not the front side that was in contact with the heatsinks... that makes no sense???

    • @PlaywithJunk
      @PlaywithJunk  3 года назад +1

      They applied thermal pads to all chips and also to the positions where the chips were not installed. You can see that the PCB is prepared for more chips.

  • @alexlol398
    @alexlol398 5 лет назад +1

    Oh... And what did you do with those HDDs? That kind of SAS HDD is VERY expensive in our country...

  • @MikeHarris1984
    @MikeHarris1984 6 лет назад +6

    It's not a standard disk. It's an enterprise-grade disk that is designed to run 24/7/365 non-stop: high-hour usage and constant data reads/writes. Totally different than a consumer-grade disk. You are also paying for the maintenance support on the drives. They have a phone-home feature: if a disk is failing, it will automatically phone HP with all the info and HP will send a new one to arrive within 4 hours of the reported drive failure.

  • @matthewghali2987
    @matthewghali2987 6 лет назад +2

    I wonder if they made the management app look similar to VMware's on purpose, to make IT guys feel more comfortable

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад +6

      I thought it looks very much like EMC Navisphere.
      But this is perhaps something of an industry standard today. Or the software comes from the same programmers in India :-)

    • @NIB2307
      @NIB2307 6 лет назад

      the new management app is called SSMC (StoreServ Management Console), based on HTML5, and looks much prettier than the old one in this video.

  • @tech-utuber2219
    @tech-utuber2219 5 лет назад +14

    "We got it back from a customer that didn't need it anymore, I don't know why..."
    AWS is the reason.

    • @PlaywithJunk
      @PlaywithJunk  5 лет назад +4

      Amazon Web Services? I don't think so...

    • @MikaelLevoniemi
      @MikaelLevoniemi 5 лет назад +7

      Not on this scale of operation, AWS becomes crazy expensive quite fast.

    • @0SteveBristow
      @0SteveBristow 5 лет назад +2

      @@MikaelLevoniemi true, but if availability is a primary concern, and your workload can be scaled horizontally, it can work out cheaper. If your workload scales, and is very very bursty, AWS is definitely cheaper than having expensive tin sitting idle in a datacenter for 50% of its lifecycle. Equally, of course, the whole system may simply have been consolidated onto a bigger SAN with a shelf of modern 15TB SSDs that outperform this, as well as having greater storage density - the power savings over spinning disks can often self-finance the move after 24 months, if you're paying the electricity bill :)

    • @MikaelLevoniemi
      @MikaelLevoniemi 5 лет назад +2

      @@0SteveBristow AWS is just fine if you spend less than a million a year on your infra. After that it becomes cheaper to employ admins and techs to install and spread around your own private cloud infra. Not as many bells and whistles, but more freedom to build your own tools.
      No point in using AWS for thousands of servers with petabytes of data.

    • @0SteveBristow
      @0SteveBristow 5 лет назад +1

      @@MikaelLevoniemi I don't disagree that you have to study your use case carefully: but having come from a business with an 18m annual budget, they were still able to close two entire datacenters by moving infrequent batch processing to the cloud: Azure in this instance, but the maths is much the same. The Azure 'low priority' tier makes a huge amount of sense for the monthly consolidation job, which runs for 3 days a month with several hundred multi-core servers. Likewise, when they had their annual customer rush, they scaled their app and web tier heavily into AWS. This lasted maybe 14 days. Scaling into public cloud saved them the run costs of some 20 fully populated racks (conclude what you will about VM density). I agree, public cloud is often a very bad answer, and needs to be very very carefully considered. However, simplifying this analysis down to 'over a million a year' or 'it's just really expensive' seems a little disingenuous. Public cloud IS closing datacenters, and in many cases (again: I agree some are making a huge mistake!) quite rightly. There are also use cases for DR that entirely depend on which country you're in as to their financial viability: companies with rack after rack sitting idle 'just in case' are able to avoid needlessly expensive tech refreshes by planning some cloud capacity for DR.
      Edit to add: thank you for an intelligent debate on this topic. It comes up in our industry often and few seem to have given it much intelligent analysis.

  • @TedBackus
    @TedBackus 6 лет назад

    great channel. been watching for years, always fun. TY

  • @alexandrumaran1184
    @alexandrumaran1184 5 лет назад +1

    I like the yellow colors

  • @mikecawood
    @mikecawood 5 лет назад +2

    Using an electric screwdriver - good man :)

  • @douro20
    @douro20 6 лет назад +3

    HP firmware on the disks. Some of their systems will refuse to run a disk without it!

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад +5

      Never had problems with foreign disks on Proliant servers. You only need a genuine HP disk frame/carrier then it works.

    • @SolidSnakeSK
      @SolidSnakeSK 6 лет назад +1

      Play with Junk really? I thought they check firmware

    • @someguy4915
      @someguy4915 6 лет назад

      @@PlaywithJunk The newer Gen9 and Gen10 servers seem to have serious issues where they will spin the fans up to 100% when a non-HPE disk is connected... Older servers do the same, but to 66%, when non-HPE PCIe cards are inserted (the DL380 G7 and DL380p Gen8 at least do this), and the MSA60 just outright refused older HPE-branded disks such as their Maxtor disks. They were HPE branded but refused to function within an MSA60 attached to a P812 or P800; put those disks in the server itself, on the exact same controller, and they work just fine.
      It seems HPE just does it randomly...

    • @0SteveBristow
      @0SteveBristow 5 лет назад

      @@someguy4915 Ditto this - Whilst older Proliants aren't so picky about disks, you can be damn sure that these storage systems will log an error with HP the minute an unknown is connected. Heck, NetApps won't accept a disk until it's been able to upgrade the disk firmware to the latest level - a process which happens automatically when the disk is inserted. The customer is expected to maintain an up to date "qualification list" of disks that are accepted by the system. On occasion, disks with unacceptably high failure rates are "de-qualified" - meaning that the system will report them as "potential risks" and the vendor will send replacements...purely based on their experience with various models.

    • @HomelabExtreme
      @HomelabExtreme 4 года назад

      @@PlaywithJunk Old post I know, but 3PARs ONLY accept known disks, and all disks have custom firmware; not even regular HP/HPE disks work in 3PAR storage systems - I speak from experience :)

  • @Cooper3312000
    @Cooper3312000 6 лет назад

    What an awesome job.

  • @ztech6596
    @ztech6596 6 лет назад

    I thought it was made by Intel even before you opened it or looked at the label. The aluminum case seemed familiar.

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад +2

      The same enclosure (without the yellow) is used by Dell and probably many others.

  • @l3p3
    @l3p3 5 лет назад +1

    13:14 64GB? My Debian laptop has 32GB and I have a ton of apps on it.
    I wonder if these 64GB are actually required for the controller; maybe 4GB would have been enough, but there are no 4GB SSDs with comparable read/write performance!
    Aaah! And of course, if a power failure happens, it can dump the memory to the SSD, so maybe that is another reason!

  • @elektrokinesis4150
    @elektrokinesis4150 3 года назад

    OK, get this: that SSD is made by Intel for Hitachi, as indicated by the P/N, and resold by HP with their label on it. It went for a ride.

    • @PlaywithJunk
      @PlaywithJunk  3 года назад

      ...then assembled in Poland, shipped to Germany, then to Switzerland.

  • @Dust599
    @Dust599 6 лет назад

    Nice back to the future reference

  • @elkostasecuritysystemsindi989
    @elkostasecuritysystemsindi989 3 года назад

    Hi,
    we have an HP 3PAR 7000 with 32 x 4TB SATA drives. It is making too much noise, please provide a solution.

    • @PlaywithJunk
      @PlaywithJunk  3 года назад

      First tell me what the problem is.... is it too noisy? Only 4 TB? Did you mean PB...?

  • @Agakir
    @Agakir 3 года назад +1

    True, a SAS (Serial Attached SCSI) drive is not IDE or SATA; however, SATA drives can work in SAS servers. Standard SAS was the fastest in its day: 10000 rpm, excellent in RAID configurations and with low failure rates, while IDE/SATA offered only 7200 rpm and higher failure rates.

    • @PlaywithJunk
      @PlaywithJunk  3 года назад +1

      There are also 15000rpm SAS drives.

    • @Agakir
      @Agakir 3 года назад

      @@PlaywithJunk I know, but the first mainstream server SAS drives were 10000 rpm, while other standard IDE/SATA drives were 7200 rpm.

  • @torquemada1971
    @torquemada1971 6 лет назад

    Sweet screwdriver. What wattage does that mod put out?

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад +1

      It's a commercial product called ES120

  • @nowaymuller6643
    @nowaymuller6643 6 лет назад +4

    For how much do you sell the HDDs? I would need some.

  • @rabidbigdog
    @rabidbigdog 3 года назад

    Pretty sure they use VxWorks not Linux for the controllers?

    • @PlaywithJunk
      @PlaywithJunk  3 года назад +1

      yes but the base of it is some sort of UNIX/Linux...heavily modified of course

  • @timothyhall7606
    @timothyhall7606 4 года назад

    Where the heck did you get that screwdriver?....

    • @PlaywithJunk
      @PlaywithJunk  4 года назад +1

      it's called ES120 and can be bought at your favourite Chinese shop

    • @timothyhall7606
      @timothyhall7606 4 года назад

      @@PlaywithJunk Where all the best toys come from. Thank you!

  • @MikeHarris1984
    @MikeHarris1984 6 лет назад

    Looks a lot like the NETAPP systems I've worked with in the past.

    • @mavo66
      @mavo66 5 лет назад

      A Dacia (or Yugo) and a Ferrari use almost similar-looking parts too... ;-)

    • @0SteveBristow
      @0SteveBristow 5 лет назад

      'Cos it's using the same disk chassis, but with some different modules / software / firmware etc. :)

    • @0SteveBristow
      @0SteveBristow 5 лет назад

      Many of these systems from that era were by Xyratex, who nailed a modular system, with well-documented standards, at a great price......for several models in a row. They were somewhat displaced by the "SSD revolution" and its need for NVMe backplanes etc.

  • @dtiydr
    @dtiydr 3 года назад

    All those SAS drives in RAID 0 = insane speed!!

    • @jaroslaww7764
      @jaroslaww7764 3 года назад

      As far as I remember, I was able to access a random 8kB block within an Oracle database with an access time of 1ms. That was measured from the perspective of a programmer, not a sysadmin.

    • @dtiydr
      @dtiydr 3 года назад

      @@jaroslaww7764 I have seen way faster speeds than that.

    • @jaroslaww7764
      @jaroslaww7764 3 года назад

      @@dtiydr Good for you! As I said I was only a programmer who had to deal with some Oracle database to achieve

    • @dtiydr
      @dtiydr 3 года назад

      @@jaroslaww7764 Ah yes, and a database is normally not extremely fast (there are exceptions of course) and is slower than a RAID 0 of several SAS disks. But 1 ms is not bad for that, I might say. Cool.

    • @jaroslaww7764
      @jaroslaww7764 3 года назад

      @@dtiydr At the time I did it, it was still the era of monolithic applications in the company I worked for. You know, a monolithic web app written in Java plus a big Oracle database plus such a storage device (now I think it was something from EMC). I had to optimize the time it took to present the most-used screen of the app to the user. Knowing that it takes 1ms to retrieve one block, I knew I could only access about 1k blocks. Now what I do is microservices, and I would create a separate database, kept mostly in memory, to serve the same purpose and not have to care about the access time.
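
      That block-budget reasoning in a couple of lines; the 1ms figure is the one quoted earlier in this thread, and the one-second page target is an assumption for illustration.

        # Sketch: turning a per-block access time into a block budget for one screen.
        block_latency_ms = 1.0     # measured random 8 kB read, as quoted above
        page_budget_ms = 1000.0    # assumed target: render the screen within ~1 second

        max_blocks = page_budget_ms / block_latency_ms
        print(f"~{max_blocks:.0f} sequentially issued random reads fit in the budget")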

  • @djangoryffel5135
    @djangoryffel5135 6 лет назад

    (The yellow) Looks like those Google/Dell servers. On purpose?

    • @someguy4915
      @someguy4915 4 года назад +2

      Nope, it's just meant to make 3Par stand out visually, which it does. Also I highly doubt most Google on-premise appliances would require such storage anyway....

  • @JonoNZ110
    @JonoNZ110 6 лет назад +5

    What screwdriver is that? It's awesome!

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад +4

      It is named ES120 and comes from China. Lots of videos about it....

    • @dimmog
      @dimmog 6 лет назад

      ebay.to/2EPl1HM

    • @reps
      @reps 6 лет назад +11

      Wouldn't recommend it, not powerful enough to disintegrate screws.

    • @Tangobaldy
      @Tangobaldy 6 лет назад

      Marco Reps that's a good point no stripping of screwheads

    • @VictorGarciaR
      @VictorGarciaR 6 лет назад +1

      Marco Reps
      I bet your homemade one is much better at taking stuff apart, plus it is cooler.
      :P

  • @hariranormal5584
    @hariranormal5584 3 года назад

    damn HP stuff interesting

  • @hubzcaps
    @hubzcaps 6 лет назад +1

    Dang, I would love to have one of those SSDs

    • @l3p3
      @l3p3 5 лет назад

      What would you do with it?
      I cannot imagine a scenario where I would need one so I do not want one.

  • @attilavidacs24
    @attilavidacs24 4 года назад

    I need one of these for my Plex server.

  • @l3p3
    @l3p3 5 лет назад

    14:35 I want to see a graphics card in there and someone playing games on a storage controller. :D

    • @PlaywithJunk
      @PlaywithJunk  5 лет назад

      If you find someone who re-writes the operating system... why not. There is even a PCI-e slot.

  • @pupil6075
    @pupil6075 5 лет назад

    This product is manufactured under patent(s) of Hitachi Global Storage Technologies registered or licensed in the United States or other countries.

  • @SteveMacSticky
    @SteveMacSticky 3 года назад

    What operating system was it using?

    • @PlaywithJunk
      @PlaywithJunk  3 года назад +1

      It uses the 3PAR OS..... based on some sort of linux.

    • @SteveMacSticky
      @SteveMacSticky 3 года назад

      @@PlaywithJunk thanks

  • @dbmaster46446
    @dbmaster46446 2 года назад

    i wonder whats on that internal SSH

    • @PlaywithJunk
      @PlaywithJunk  2 года назад

      SSH?

    • @dbmaster46446
      @dbmaster46446 2 года назад

      @@PlaywithJunk wups i ment SSD :D

    • @PlaywithJunk
      @PlaywithJunk  2 года назад

      @@dbmaster46446 I thought so... 🙂 The SSD holds the operating system of the controller. It's a Unix-based OS. It is the boot drive of the controller.

    • @dbmaster46446
      @dbmaster46446 2 года назад

      @@PlaywithJunk but i was wondering if it is modifiable or bootable on a different system

  • @attilavidacs24
    @attilavidacs24 5 лет назад +5

    Only Fortune 500 companies and government departments can afford this!

    • @Jaburu
      @Jaburu 5 лет назад +1

      it's $150'000, not $150'000'000

    • @jaroslaww7764
      @jaroslaww7764 3 года назад

      Not really. The company I worked for had one; it was quite big, but not comparable to Fortune 500 companies.

  • @station240
    @station240 6 лет назад +1

    If you switch off the PSU to pull it out, the controller uses the battery power to save data to a disk and shuts down. But if the reason for pulling the PSU is that the battery is totally dead, then what? Also, it seems a bit of a bad idea to drain the battery before removal, in case of the one-in-a-million chance that the power fails soon after refitting the PSU.
    Is there no option for the controllers to have A+B PSUs, so there is a way to hot-swap PSUs without shutting down controllers?

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад +2

      There are two power supplies with two batteries. If one is faulty the other will still work. The batteries are checked for health regularly and if one is not good, you get a message to replace it.
      I don't know how this system handles the batteries but Proliant servers check the battery at every startup and regularly when running. If something is wrong with the battery the cache memory will be disabled until the battery is fixed. So no danger at all... System runs a bit slower but it runs.
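
      A minimal sketch of the policy described above - illustrative logic only, not the actual ProLiant or 3PAR firmware behaviour:

        # Sketch: fall back from write-back to write-through caching when the cache
        # battery is missing or unhealthy, so unflushed data can never be lost.
        from dataclasses import dataclass

        @dataclass
        class CacheBattery:
            present: bool
            healthy: bool

        def cache_mode(battery: CacheBattery) -> str:
            if battery.present and battery.healthy:
                return "write-back"    # fast: ack once data is in battery-backed RAM
            return "write-through"     # safe but slower: ack only after data hits disk

        print(cache_mode(CacheBattery(present=True, healthy=True)))    # write-back
        print(cache_mode(CacheBattery(present=True, healthy=False)))   # write-through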

  • @attilavidacs24
    @attilavidacs24 6 лет назад +1

    That's some hardcore enterprise storage. You can buy a house for the cost of that unit.

    • @0SteveBristow
      @0SteveBristow 5 лет назад

      Would you believe it's "Mid Tier" - the really expensive stuff is totally nuts!

  • @0SteveBristow
    @0SteveBristow 5 лет назад +3

    Unhappy with some of the accuracy here - yes, the system will attempt to cleanly shut down a controller in a power loss, but there most definitely IS a shutdown procedure.

    Enterprise SAS disks go through extensive extra testing prior to qualification - not only do SAS disk mechanisms get tested more aggressively by manufacturers (who will specifically rate them at far narrower tolerances than SATA disks, as well as usually guaranteeing them for longer and supporting harder use), but the controllers have much smarter firmware, generally understanding and advertising failure-mode behaviours much earlier and measuring a lot more disk data than "SMART" gives you on consumer disks. Finally, the disks themselves go through additional testing by the system vendor, whose controller firmware will react appropriately to specific disk behaviour. Furthermore, a failed SAS disk will be replaced by a storage system vendor at their cost in 4hrs - or they pay a contractual breach penalty. These systems host millions (or billions) of dollars of data that can't afford to be unavailable - to the extent that RAID purely exists to allow an online spare to replace a failing disk. You then expect the failure to be replaced inside 4hrs to become the spare. So you're paying for a fair bit more than a bracket and a sticker.

    Disks can, and HAVE been, re-read after a format. It's not simple, but if you have ultra-confidential data on there (maybe government, military, or significant financial) people can (and have) rescued scrapped, or even failed, disks and recovered data from them. There is a reason standards like PCI-DSS explicitly require encryption at rest.

    SSDs with adequate airflow do not require heatsinks - however they DO benefit from significantly different wear-levelling algorithms when used in storage systems; stuff like storing journal blocks and data blocks in separate disk areas is critical to their availability, as well as performance.

    Whilst I appreciate the need to simplify this complex technology, your summary seems overly reductionist. Yes, these systems ARE overpriced - but mainly because the value of them is not in the hardware, but in the service they provide... especially when it's to offer the lowest latency and highest throughput possible with the maximum of availability. Try doing synchronous disk writes across two computers and do some performance testing, and you will pretty quickly see where your money goes.
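
    Along the lines of that last suggestion, here is a minimal single-machine sketch: it times fsync'd 8kB writes, which is only the local part of the cost; a SAN that mirrors every write to a partner controller before acknowledging adds at least a network round-trip on top of these numbers. The file path and iteration count are arbitrary.

      # Sketch: measure synchronous 8 kB write latency on whatever disk holds this file.
      import os, time, statistics

      PATH = "sync_write_test.bin"    # hypothetical test file; put it on the disk under test
      BLOCK = os.urandom(8 * 1024)    # 8 kB block, like the Oracle example elsewhere here
      N = 1000

      fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
      latencies = []
      for _ in range(N):
          t0 = time.perf_counter()
          os.write(fd, BLOCK)
          os.fsync(fd)                # force data (and metadata) to stable storage
          latencies.append((time.perf_counter() - t0) * 1000.0)
      os.close(fd)
      os.remove(PATH)

      print(f"median {statistics.median(latencies):.2f} ms, "
            f"p99 {sorted(latencies)[int(N * 0.99)]:.2f} ms per synchronous 8 kB write")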

    • @JtagSheep
      @JtagSheep 5 лет назад +1

      Glad to see it wasn't just me. There have been a couple of times hardware and prices were mentioned throughout other videos and I thought to myself, no, that is certainly not correct. The suppliers do overcharge for the hardware, relating to what you mentioned about replacements etc., but I did think to myself that the prices cannot be for the hardware alone; if you were to buy it off the shelf with no service warranty etc. then it would be significantly cheaper. Nonetheless I do enjoy PWJ videos, as it is interesting to see some of the stuff, although most of it I have seen before through working in a data center!

  • @gizmoriderfulye8007
    @gizmoriderfulye8007 5 лет назад

    Why no speed benchmarks? :D

  • @dmitriyvassilyev5849
    @dmitriyvassilyev5849 5 лет назад

    IMHO, with a SAN array, when you pay $6k for a 1TB SSD it is like paying $200 for the hardware (a similar OEM SAS SSD) and the rest for the SAN software (the right to run the SAN software on an SSD of this particular size). Just like with Oracle DB Enterprise, where you pay $57k for one CPU license.

    • @PlaywithJunk
      @PlaywithJunk  5 лет назад

      I would say $500 for the SSD and $5500 for the firmware branding and the sticker with HP logo.

    • @dmitriyvassilyev5849
      @dmitriyvassilyev5849 5 лет назад

      Does that mean the software itself has a flat cost, whether you run a 6TB or a 1000TB config? With SDS the price usually increases with usable space (ONTAP Select, ScaleIO, HPE VSA, Nexenta).

  • @SproutyPottedPlant
    @SproutyPottedPlant 6 лет назад

    Sexy piece of equipment! Also nice screwdriver, it looks like an electronic cigarette mod! Don't vape it 😀👍

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад

      ES120 screwdriver.... google that :-)

    • @stonent
      @stonent 6 лет назад +1

      That screwdriver definitely knows the way.

  • @emmanuelnfila5744
    @emmanuelnfila5744 6 лет назад

    Good Stuff

  • @electronicbob6237
    @electronicbob6237 6 лет назад +4

    Marty did not say China... he said Japan...

    • @PlaywithJunk
      @PlaywithJunk  6 лет назад

      I know.... I did this on purpose :-)

  • @thecriss88
    @thecriss88 5 лет назад

    Those disks are expensive because, unlike the one in your home laptop, they are tested for continuous workloads.

    • @PlaywithJunk
      @PlaywithJunk  5 лет назад +1

      But not that expensive. You can get exactly the same drive from the original manufacturer for a fraction of the price. If it's not 3PAR branded... :-)

  • @dtiydr
    @dtiydr 3 года назад

    3:00 And people, just so you know, SAS drives are NOT the same as SATA (though I think they are backwards compatible in one direction, not the other), especially the enterprise ones, which are pretty much the only thing SAS is meant for. These drives are faster, have almost double the MTTF and can run 24/7 compared to ordinary consumer drives, no matter what speed those might have. You simply get what you pay for.

    • @PlaywithJunk
      @PlaywithJunk  3 года назад

      SAS drives are normally built to work 24x7 while SATA drives are designed for private PC use. As a rule of thumb you can say that a more expensive SAS controller can handle SAS and SATA drives while a cheap SATA controller can't handle SAS drives.
      Some drives are available with SAS or SATA interfaces, so the difference is only the interface chip. I'm not sure where the price difference comes from in that case... ;-)

    • @dtiydr
      @dtiydr 3 года назад

      @@PlaywithJunk I have once seen SAS drives that were not enterprise ones, just ordinary hard drives. :D But for real enterprise SAS drives you really get what you pay for, and as you say, they run 24/7 all day long.
      I have a little old server myself that I got from my boss a year ago, a Dell PowerEdge T430 with dual CPUs and 64GB of memory, but only 5 x 300GB SAS hard drives in it, so it can't really be used for much. It's only for fun, though; it's nowhere near used for what it was built for.
      Enterprise hard drives cost so much, even used, that it's not really an option to upgrade the storage in it. Sure, I could put in a 4TB SATA drive, but I wouldn't gain anything at all in speed; the SAS drives would be much faster and just sit waiting for the SATA to finish.

  • @liliwinnt6
    @liliwinnt6 5 лет назад +1

    There's just nobody making any 3.5 or 5.25 inch SSDs!

    • @dmitriyvassilyev5849
      @dmitriyvassilyev5849 5 лет назад

      arstechnica.com/gadgets/2016/08/seagate-unveils-60tb-ssd-the-worlds-largest-hard-drive/

    • @liliwinnt6
      @liliwinnt6 5 лет назад

      @@dmitriyvassilyev5849 check

    • @liliwinnt6
      @liliwinnt6 5 лет назад

      @@dmitriyvassilyev5849 OMG they are large

  • @b87b84
    @b87b84 3 года назад

    My dream is to be in a datacenter just looking at, disassembling and decommissioning these machines on a daily basis...

    • @PlaywithJunk
      @PlaywithJunk  3 года назад

      Well... open a company that specializes in IT hardware recycling and data erasing/destroying. This is a service that is needed all over the world. Every disk that goes to scrap probably has sensitive data on it. Customers don't have the time and equipment to take care of it themselves.
      And if you recycle properly, you also get money from scrap metals and cables.
      And you're doing something good for the environment...

  • @profzen1
    @profzen1 6 лет назад

    Interesting. Thanks.

  • @mikeschurai7220
    @mikeschurai7220 5 лет назад

    👍👍👍👍👍

  • @binarybear9711
    @binarybear9711 5 лет назад +4

    20:27 We can't be friends - the router belongs at .1 and not .254... ^^

    • @WarrenGarabrandt
      @WarrenGarabrandt 5 лет назад +1

      I use .254 for gateway on most networks. I have used .1 for some. I have seen it go both ways in different environments. It's not a big deal, and I don't get bent out of shape too much about it.

    • @Phobeuscz
      @Phobeuscz 5 лет назад +1

      The router belongs wherever my DHCP server shits out a lease to :D :D :D

  • @frankynakamoto2308
    @frankynakamoto2308 6 лет назад

    Why don't they make it with M.2 so it uses less energy, and add RAM to it as cache, so it can be super small and super fast?

    • @CheapSushi
      @CheapSushi 6 лет назад +1

      They probably do and probably charge more for it. SuperMicro has some new all NVMe M.2 slot 1U servers that are really nice. Probably way cheaper. But the price of this isn't just hardware, but software & support.

    • @someguy4915
      @someguy4915 6 лет назад +1

      M.2 is just a connector and so will not use any energy, much like the SAS connection itself does not use power... You can do SATA (6Gb/s) over M.2, which would be slower than these disks' SAS (12Gb/s dual port), or NVMe, which is limited in how many disks can be used and is only one connection per SSD, so no redundancy there...
      M.2 NVMe SSDs will actually use much more power than SATA/SAS; it's a big problem with such drives that they overheat and start throttling within minutes... This system is 24/7 and is capable of 100% load 24/7 (not recommended but possible), so NVMe would use an enormous amount of additional power, cooling and cost for no benefit...
      Anyway, if you have two shelves with 24 SSDs each you get 160+ Gb/s anyway, with so much IOPS that the individual connection doesn't matter...

  • @id104335409
    @id104335409 6 лет назад +1

    6Gs for an SSD that was formatted differently??

    • @someguy4915
      @someguy4915 6 лет назад +5

      6Gs for an SSD that will work with the system and if it ever fails HP comes and replaces it within 4 hours.

    • @Tangobaldy
      @Tangobaldy 6 лет назад +1

      Some Guy yeah that's what you pay for. The service speed

    • @matthewghali2987
      @matthewghali2987 6 лет назад

      If you don't wanna play, go build it yourself on Newegg... you're on your own when it dies though. Sort of a bummer when you have a few thousand of them.

    • @lowstaar
      @lowstaar 6 лет назад +1

      You need redundancy, spare parts, and warranty is also a thing

    • @henninghoefer
      @henninghoefer 4 года назад

      @@someguy4915 Yes, but then you pay the k$ every month for the support contract... (And you need it. Our 8200 seems to break a disk every few weeks.)

  • @relaxingnature2617
    @relaxingnature2617 Год назад

    Computer crap is mindbogglingly expensive

  • @beedslolkuntus2070
    @beedslolkuntus2070 4 года назад

    Why's the SSD so expensive?

    • @PlaywithJunk
      @PlaywithJunk  4 года назад +1

      because of the HP sticker?

    • @beedslolkuntus2070
      @beedslolkuntus2070 4 года назад

      Play with Junk
      Oh, you mean the Highly Problematic Enterprise sticker??
      Maybe

    • @PlaywithJunk
      @PlaywithJunk  4 года назад

      @@beedslolkuntus2070 :-) that was a good one!

    • @beedslolkuntus2070
      @beedslolkuntus2070 4 года назад

      Play with Junk
      (: I promise! Stupid Printers and servers

  • @dtiydr
    @dtiydr 3 года назад

    Most likely not something you buy for yourself.

  • @lijie6431
    @lijie6431 5 лет назад +2

    You forgot to show us the spy chip China puts in each server.

    • @PlaywithJunk
      @PlaywithJunk  5 лет назад +3

      It's the same chip the USA uses too :-)

  • @RBSVader
    @RBSVader 5 лет назад

    These are not junk at all (unlike modern consumer SSDs, which are total sh..t).

  • @iprot00
    @iprot00 5 лет назад

    Definitely not worth the $6000.

    • @PlaywithJunk
      @PlaywithJunk  5 лет назад

      I agree..... but there is not much you can do when you have such a system :-)

    • @iprot00
      @iprot00 5 лет назад

      @@PlaywithJunk Can they be used with regular drives? Regular enterprise drives can be formatted to different sector sizes like 520 and 528 bytes. I had some 520 ones that I've formatted to regular 512 byte sectors and they worked normally in a normal server.
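
      For anyone wanting to try that reformat on a generic server: the usual route is sg_format from sg3_utils, shown here wrapped in a small Python sketch. The device path is a placeholder, the operation wipes the drive, and (as the reply below explains) it still will not make a foreign drive acceptable to a 3PAR.

        # Sketch: reformat a SAS/SCSI drive from 520-byte to 512-byte sectors.
        # DESTRUCTIVE and slow - double-check the device path before running.
        import subprocess

        DEVICE = "/dev/sg3"    # hypothetical SCSI generic device of the drive to reformat

        subprocess.run(["sg_format", "--format", "--size=512", DEVICE], check=True)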

    • @PlaywithJunk
      @PlaywithJunk  5 лет назад

      No. We tried that, but just reformatting with 520-byte blocks does not work. The 3PAR wants 3PAR drives. It checks the firmware and if it sees a foreign drive, it refuses to take it in.

    • @iprot00
      @iprot00 5 лет назад

      @@PlaywithJunk Well that's a shame for such an expensive system... Ah well...