Seagate Mach.2 Dual Actuator HDDs | Investigating How They Actually Work

  • Published: 2 May 2024
  • In this video I'm investigating the Seagate Mach.2 dual actuator HDD technology. These drives have the potential to make a massive leap forward in HDD performance, both for bandwidth and IOPS. We'll examine what Seagate has to say about their technology and I will share my thoughts. Also, we'll set up one of these Mach.2 drives in my server to check out how it actually works. Then, we'll run some benchmarks on the individual halves and also on a RAID-0 composed of both "halves" using Linux mdadm (a command sketch follows the timestamps below). Finally, I'll give my thoughts on the benchmark results and considerations for using these Mach.2 drives in RAID parity setups like RAID-5/6 or ZFS raidz1/2/3.
    If you want to buy Seagate Mach.2 drives on eBay, follow this link: ebay.us/Xo6eSC
    Timestamps:
    0:03 - Introducing the Seagate Mach.2 Exos 2X14 HDD
    0:48 - Seagate's explanation of dual actuator HDDs
    1:28 - Advantages of Mach.2 dual actuator drives
    1:50 - My thoughts on the advantages of dual actuator drives
    3:50 - Setting up the Mach.2 in my test server
    5:32 - Examining the Mach.2 HDD in Linux
    6:33 - Examining Linux logs to see how the Mach.2 presents itself to the OS
    10:50 - Checking the SAS-3 link with lsiutil
    11:21 - Examining SMART data of Mach.2 drive with smartctl
    13:13 - Fio benchmark sequential read test on individual "sides" of the Mach.2
    14:44 - Fio benchmark sequential write test on individual "sides" of the Mach.2
    15:35 - Fio benchmark random read test on individual "sides" of the Mach.2
    16:27 - Creating RAID-0 with Linux mdadm on Mach.2 drives
    17:19 - Fio benchmark sequential read test on Mach.2 RAID-0
    19:36 - Fio benchmark sequential write test on Mach.2 RAID-0
    19:51 - Fio benchmark random read test on Mach.2 RAID-0
    20:33 - Linux mdadm RAID-0 might not be optimized for Mach.2 technology
    21:15 - My thoughts about Mach.2 vs traditional single actuator HDDs
    22:16 - Considerations for RAID parity setups (RAID5/6, ZFS raidz1/2/3)
    23:21 - Outro
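
    Here is a minimal sketch of the mdadm + fio steps shown in the video (device names /dev/sda and /dev/sdb are hypothetical stand-ins for the two Mach.2 LUNs; check lsscsi or lsblk for the real ones):

      # stripe the two LUNs of one Mach.2 drive into a single RAID-0
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

      # sequential read benchmark against the array
      fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M --direct=1 \
          --ioengine=libaio --iodepth=16 --runtime=60 --time_based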
    If you'd like to support this channel, please consider shopping at my eBay store: ebay.to/2ZKBFDM
    eBay Partner Affiliate disclosure:
    The eBay links in this video description are eBay partner affiliate links. By using these links to shop on eBay, you support my channel at no additional cost to you. Even if you do not buy from the ART OF SERVER eBay store, any purchases you make on eBay via these links will help support my channel. Please consider using them for your eBay shopping. Thank you for all your support! :-)
    #artofserver #seagate
  • Science

Comments • 130

  • @vitoswat
    @vitoswat 28 days ago +30

    I believe the disk logic should create an internal RAID 0 and present it to the OS as a single drive, capturing the performance gain. Then you could add it to the pool as a normal HDD without worrying about how to treat the halves.

    • @KaldekBoch
      @KaldekBoch 28 days ago +2

      Agreed. Useless for an Unraid array, for example.

    • @Sullrosh
      @Sullrosh 28 days ago

      I wonder if Unraid would count it as 2 drives towards the limit.

    • @Gastell0
      @Gastell0 28 days ago +10

      That's exactly how the SATA version of this drive works.

    • @GrishTech
      @GrishTech 27 days ago +7

      @@Gastell0 Exactly. The SATA version works almost like this; however, with the SATA version you can get double the performance if you software-RAID it by size. Wendell did a great video about this limitation in SATA and how to work around it.
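
      For reference, that "software-RAID it by size" workaround boils down to partitioning the SATA drive at the LBA midpoint and striping the two partitions. A hedged sketch, assuming the two actuators map to the lower and upper LBA halves and /dev/sdX stands in for the SATA 2X14:

        # split the drive at 50% so each partition lands on one actuator
        parted -s /dev/sdX mklabel gpt
        parted -s /dev/sdX mkpart lower 0% 50%
        parted -s /dev/sdX mkpart upper 50% 100%
        # stripe the two halves back together for ~2x sequential throughput
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX1 /dev/sdX2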

    • @ArtofServer
      @ArtofServer  27 days ago +6

      Yeah, I think that might be the direction this technology will evolve into. I believe there's a SATA version of this that behaves like that, but I don't know how it handles the data balancing. Then, maybe after that, they will split the heads even further into quad actuators and achieve 1GB/s throughput!

  • @chrismoore9997
    @chrismoore9997 28 days ago +20

    I appreciate you putting this video together. I have a 20-pack of these drives and this info will help me better plan how to use them.

    • @ArtofServer
      @ArtofServer  28 days ago +5

      Cool! Glad it will help you out! :-) I had a few people ask me about these a while back and I wasn't sure how they worked. Decided to just buy one and check it out. Hope this will be useful to many!

    • @chrismoore9997
      @chrismoore9997 26 days ago +1

      @@ArtofServer - I will likely use this with my TrueNAS system, but I will have to manually ensure that each half of the drive is in a different vdev.

    • @IndigoVFX
      @IndigoVFX 21 days ago +2

      iXsystems could really help here by adding logic to provide warnings to any unsuspecting sysadmin who's not acquainted with the drive architecture.
      That said, I haven't personally seen how the drive presents in TrueNAS - it does after all have two distinct LUNs, and that should be a clue to most people to use caution and plan a vdev layout accordingly.
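
      For anyone curious, the two-LUN presentation is easy to spot from the SCSI address. Illustrative lsscsi output (device names and model string here are made up; the real tell is one host:channel:target with LUN 0 and LUN 1):

        $ lsscsi
        [0:0:8:0]  disk  SEAGATE  EXOS2X14  E002  /dev/sdb    <- LUN 0, first actuator
        [0:0:8:1]  disk  SEAGATE  EXOS2X14  E002  /dev/sdc    <- LUN 1, second actuator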

    • @boosthacker
      @boosthacker 9 days ago

      @@chrismoore9997 There is a Level1Techs forum post where a guy made a script that does this for you. I can't find the link, but maybe you can. Seems like a lot of ground prep/work even if there is a script to do it for you.

  • @esra_erimez
    @esra_erimez 28 days ago +12

    I'm not a system administrator, but this video was really interesting.

    • @ArtofServer
      @ArtofServer  27 days ago

      Thanks for watching and commenting!

  • @PeterBatah
    @PeterBatah 25 days ago +1

    Another very insightful presentation. Thank you!

  • @braixeninfection6312
    @braixeninfection6312 13 days ago +1

    It seems so simple but awesome! This will probably be a pretty huge advantage for HDD evolution in the future. Though I was disappointed you didn't take the drive apart to show the guts; I wanted to see how everything works together.

    • @ArtofServer
      @ArtofServer  12 days ago

      😂 I do still want to use this HDD for other things. I wouldn't want to risk contaminating it by opening it. Seagate has a video of this drive with a clear cover so you can see it in action.

  • @jeschinstad
    @jeschinstad 13 days ago +1

    It's been a very long time since I was impressed by an HDD, but this was actually rather cool. It would've been interesting to see tests of other ways of raiding them, like using btrfs or zfs. I think that could plausibly make a significant difference. For all I know, there are mechanical reasons why you don't get double performance when using both at once, and perhaps they're more designed for sequential throughput? That is still a great thing, even if you don't get the same performance boost on random I/O.

    • @ArtofServer
      @ArtofServer  12 days ago

      I definitely think it is interesting.

  • @deineroehre
    @deineroehre 24 days ago +6

    If they weren't from Seagate, I would buy these in a heartbeat.

    • @ArtofServer
      @ArtofServer  24 days ago +3

      HGST also released similar technology, but I haven't seen it on the used market.

    • @TopiasSalakka
      @TopiasSalakka 17 days ago +1

      There's nothing wrong with Seagate today.

    • @battokizu
      @battokizu 15 days ago

      Yeah, if this was a decade ago, sure. But honestly, the random failures you'll get come from rough handling at warehouses and in delivery, way outside your purchase, rather than from anything wrong with the drives themselves. If the failure rate were that bad, people wouldn't buy spinning rust anymore.

  • @dorinxtg
    @dorinxtg 25 days ago +1

    I'm using smartctl on both Proxmox nodes and TrueNAS Scale, and neither shows messages like yours. Did you configure anything to show these (very helpful) messages?

    • @ArtofServer
      @ArtofServer  25 days ago

      No. But keep in mind that SMART output for SAS drives vs SATA drives is very different. In this case, it was a SAS drive. If you're looking at SATA drives, the output will be different.
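
      A quick sketch of the difference (device names are hypothetical): smartctl usually picks the transport automatically, but you can force it, and a SAS report (grown defect list, ECC counters) looks nothing like the ATA attribute table:

        smartctl -a /dev/sdb           # SAS drive: SCSI-style health report
        smartctl -a -d scsi /dev/sdb   # same, with the transport forced
        smartctl -a -d ata /dev/sda    # SATA drive: ATA attribute table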

  • @BriceBentler
    @BriceBentler 20 days ago

    Really helpful information. Thanks for calling out the redundancy bit at the end.

  • @JohnPamplin
    @JohnPamplin 28 days ago +2

    Yeah, the only concern I would have is motor failure, which takes out "2" drives. I think laying them out in RAID50 might work?

    • @ArtofServer
      @ArtofServer  27 days ago

      I don't think I would do anything with a single parity setup though... I would choose double parity (RAID6) at the minimum.

  • @Koop1337
    @Koop1337 28 days ago +2

    Interesting stuff! Will be curious to see how much they sell for. Any idea what capacities they go up to?
    Obviously there is risk in using these with, say, ZFS, but I think with the right setup it could work out. Spread across lots of mirrors perhaps? Or maybe it'd be too complicated for its own good, and if you really want better performance you should just look to flash. Either way, really neat, thanks for sharing!

    • @kingneutron1
      @kingneutron1 27 days ago +2

      @@_-Karl-_ I think for a zfs mirror you would want something like ' mirror sg1a sg2a mirror sg1b sg2b ', where sg1a is drive 1's first half, sg2a = drive 2's first half, and document it somewhere.
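
      A sketch of that layout with made-up device names (d1/d2 are physical drives, lun0/lun1 their halves); each mirror pairs halves from different physical drives, so losing one drive degrades both vdevs without breaking either:

        zpool create tank \
          mirror d1-lun0 d2-lun0 \
          mirror d1-lun1 d2-lun1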

  • @jamesbutler5570
    @jamesbutler5570 28 days ago +5

    That took long enough. Conner actually patented a hard drive design that utilized dual actuators.

    • @ArtofServer
      @ArtofServer  28 days ago +4

      Conner was acquired by Seagate in 1996. So they revived some old IP for Mach.2?

    • @jfkastner
      @jfkastner 27 days ago +1

      I remember that one ... never caught on
      hxxps://en.m.wikipedia.org/wiki/File:Conner_Peripherals_%22Chinook%22_dual-actuator_drive.jpg

    • @stonent
      @stonent 9 days ago

      Also, either Conner or Seagate at one point made an OEM drive pair that took 2 IDE drives and presented them as a single drive with the combined size of both. I remember seeing it for the first time on an AST system. When either drive was unplugged, it would not detect the remaining drive.

  • @iankester-haney3315
    @iankester-haney3315 28 days ago +1

    As I understood the documentation, a dual channel cable can independently control each half. The drive electronics just compensate with a single channel cable.

    • @ArtofServer
      @ArtofServer  27 days ago

      It might be interesting to try this drive in a dual port setup to see what the secondary SAS port will see. I suspect it will present a different SCSI target ID with 2 LUNs as well. Have to see if I can find a cable to do this, as I don't have any dual SAS expander backplanes for the 3.5" format.

  • @_ugabuga_
    @_ugabuga_ 28 days ago +2

    So all this made me wonder. They show one drive with two parts, but if the drive fails, both parts will be gone, because they share the same electronics and so on. So if you make a ZFS pool with raidz1 and two parts are gone, will this lead to a broken pool? Would it be safer with these drives to make a raidz2 pool to counter their split personality?

    • @ArtofServer
      @ArtofServer  27 days ago +2

      Even with a raidz2/3 pool, I would not put more than one LUN from the same drive into the same vdev.
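
      As a sketch of that rule in practice (placeholder names dN-lunM for drive N, LUN M; six physical drives, twelve LUNs): put all the first halves in one raidz2 vdev and all the second halves in the other, so pulling any one drive costs each vdev exactly one member:

        zpool create tank \
          raidz2 d1-lun0 d2-lun0 d3-lun0 d4-lun0 d5-lun0 d6-lun0 \
          raidz2 d1-lun1 d2-lun1 d3-lun1 d4-lun1 d5-lun1 d6-lun1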

  • @greggv8
    @greggv8 15 days ago +1

    I was hoping it was doing RAID 0 internally so it was transparent to the system. But it's just two drives in one housing that can be accessed via one connection, and I assume a port multiplier built into the drive. So what's the benefit, if RAIDing separate drives can achieve the same throughput? It's late, so I'm going to bed after watching the first 7 minutes.

  • @minigpracing3068
    @minigpracing3068 20 days ago

    Interesting, I'd probably mirror each drive within itself, then some form of raid array. This should bring the iops up since the mirror can then access parts of the files on each actuator stack. It's a theory and I may need to look into this for my next nas hardware refresh. Also need to contrast cost and performance against SSD.

  • @jonathanbuzzard1376
    @jonathanbuzzard1376 24 days ago +4

    If you have been in the game long enough, *ALL* hard drive manufacturers produce bad drive models from time to time. So saying you don't recommend manufacturer X because of problem Y is frankly dumb as hell. Besides, at this point there is only Seagate, Western Digital and Toshiba left standing. At work I currently have hundreds of Seagate drives in use and the failure rates are low, astoundingly low given that a couple hundred of the drives are over 10 years old.

    • @ArtofServer
      @ArtofServer  24 days ago

      You're entitled to your own opinions. But if manufacturer X keeps producing problems Y1, Y2, Y3, etc., at some point you realize maybe stay away from manufacturer X. I think that's a reasonable response when you've been around long enough to notice a pattern. Still thinking X makes great products after seeing that pattern would be, frankly, dumb as hell. I have no brand loyalties, and if manufacturer X changes their patterns in a positive direction, I have no issues with their products and would use them. Even my favorite HDD maker HGST (now WD) was born out of IBM storage, which had produced their infamous "DeathStar" drives, which I avoided back in the day.

    • @jonathanbuzzard1376
      @jonathanbuzzard1376 22 days ago

      @@ArtofServer The point is manufacturers X, Y, and Z have all produced, and continue to produce from time to time, dud models. As such, avoiding manufacturer X because of some historical bad model is nonsensical. Seagate is not particularly worse than other manufacturers, as the data from Backblaze shows. It is more important to avoid bad models than bad manufacturers, as this makes a much bigger difference by quite a considerable margin.

  • @phildegruy9295
    @phildegruy9295 27 days ago +1

    You need to get a couple of drives and set up a system with a SAS 2 and 3 backplane and install the drives there in the backplane. Do the tests both with a single connection to the backplane and a redundant SAS connection, which would use both SAS channels on the drives. Then redo the tests.
    I would also set up a basic install of TrueNAS Scale and do the tests with the drives in a mirror and a raidz1, checking whether TrueNAS can actually handle these drives properly.
    There was a big argument on the old TrueNAS forums last year, with a lot of half-info tossed out, where someone bought a bunch of these drives and the pool they created only showed half of the capacity, and testing only showed half the capacity recognized in their configuration. They wanted to know what happened to the other half of the drive capacity. The argument was never really solved, and I think the OP sent the drives back and conventional drives were installed. (Maybe you got one of them.) I believe there have been random reports of new drives in certain systems not reporting the correct capacity or acting weird, dropping known good and new drives, etc.

    • @ArtofServer
      @ArtofServer  27 days ago

      I need to find some dual SAS expander backplanes for 3.5" format then... my only dual SAS expander backplane is for 2.5" format servers. And I guess I would need more of these drives.... though, I don't have a need for them so not sure I want to invest in an entire set of these drives to test an array setup.

  • @zedalert
    @zedalert 18 days ago

    I wish they'd continue to grow this technology into separate head movement per platter and bring it to regular SATA drives. Maybe it will already be joined together as RAID 0 inside the drive's firmware.

  • @AfroJewelz
    @AfroJewelz 28 days ago +1

    Can I split these two parts into 2 different vdevs, combined with other similar dual actuator disks, to form raidz in one pool and still get the parity I need? Another question: since it still occupies a single SATA/SAS port, will the HBA card's channels reduce to 1/2 when attaching this kind of disk?

    • @AfroJewelz
      @AfroJewelz 28 days ago

      @@_-Karl-_ The answer to 1 matches my estimate, and I am happy with that risk/capacity & bandwidth efficiency. IOPS over SATA 3Gbps is not my concern since I only care about big files on this kind of drive. The only problem is my HBA has 4 sets of sff8747 split into 16 SATA/SAS drives, but it seems to occupy only 8 channels of PCIe links. Wondering how many of those channels these dual actuator drives actually occupy.

  • @tubeDude48
    @tubeDude48 27 days ago +2

    I'm NOT a fan either!! My home is in Santa Cruz, CA, and Scotts Valley, CA (where Seagate is located) is just 7 miles up the road, off of HWY 17. They have ALWAYS had bearing problems! SUGGESTION: use *CTRL-L* to clear the screen before each command you run. It's sometimes hard to see the bottom of the screen!

    • @ArtofServer
      @ArtofServer  27 days ago +1

      Oh, Seagate has had more problems than just bearings. I can't count the number of times I've tried to help someone with their HBA controller only to discover some weird issue with a Seagate drive. So many bugs in their firmware...
      Thanks for the suggestion! Appreciate it! :-)

    • @tubeDude48
      @tubeDude48 27 days ago

      @@ArtofServer - True. To get some or (maybe) all data back, I used SpinRite from Gibson Research! Steve does some incredible code, all in ASM! I had a client at a banking institution, and they had a small 40GB drive that wouldn't boot. So I booted it with SpinRite and let it run overnight, came in the morning, removed the CD, and it came right up, with all data recovered!

  • @deacbeugene
    @deacbeugene 17 days ago

    The md RAID-0 was created with a 512K chunk size (the amount written to each drive before moving on to the next), so maybe it's worth setting a much lower chunk size to test 4K block size performance.
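
    A sketch of that test, reusing the hypothetical LUN names from the description above (mdadm's --chunk is in KiB; the md default is 512K):

      mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/sda /dev/sdb
      fio --name=randread --filename=/dev/md0 --rw=randread --bs=4k --direct=1 \
          --ioengine=libaio --iodepth=32 --runtime=60 --time_based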

  • @Matlockization
    @Matlockization 25 days ago +1

    Very interesting.

  • @mc-not_escher
    @mc-not_escher 16 days ago

    Interesting gimmick. I'd love to see how it fares with btrfs, but I'll admit that I've been out of the loop when it comes to servers for a good few years. Most stuff I touch these days is SSDs. Kinda thinking it would have more than one failure point with the extra actuators. Reminds me of those WD Raptor drives from back in the day.

  • @AmaroqStarwind
    @AmaroqStarwind 14 days ago

    But can it go at 15000 RPM? I once saw a hard drive going that fast.
    And can they make an SSHD (hybrid drive) version of it?
    If they were to combine this with their tech for 240 Terabyte hard drives, high capacity flash memory technology (for the hybrid drive stuff), and ZFS…
    I think it would be a game changer for large datacenters.

  • @xtornado123
    @xtornado123 10 days ago +1

    I asked iXsystems for those drives when my friend's company was buying storage from them, but this is not what they put into enterprise storage (yet). I was told it's not tested well enough to put into production.

    • @ArtofServer
      @ArtofServer  8 days ago +1

      It may be true that iXsystems hasn't done a lot of testing of these types of drives. But these drives came out of a data center from a few years back, so there are enterprises that apparently have been using these drives for some time. As mentioned in the vid, care needs to be taken when planning the geometry of your ZFS vdevs when using these drives.

    • @xtornado123
      @xtornado123 8 days ago

      @@ArtofServer Yeah, but in the end it is interesting, as 4 disks can saturate 20G easily :D By the way, thanks for your videos, I'm learning a lot.

  • @EduardoSantanaSeverino
    @EduardoSantanaSeverino 28 days ago +1

    Hey, thank you very much for the video. Is it possible for you to post the command lines used in the video? That would be awesome 👌 superb. Thanks.

    • @ArtofServer
      @ArtofServer  27 days ago

      Thanks! What do you mean by the command lines? Are you not able to see them in the video? Or you want something you can copy & paste?

    • @EduardoSantanaSeverino
      @EduardoSantanaSeverino 27 days ago

      @ArtofServer
      Correct, I am able to see them in the video, but I was wondering if I could just copy and paste them and save them somewhere to try later.
      Of course, if possible. Thanks.

  • @kaseyboles30
    @kaseyboles30 26 days ago +3

    It's not that new an idea. I recall someone experimenting with an actuator on each side of the drive, so you had two heads for each platter. It was passed over because the platters had to be smaller to allow the second actuator on the other side while keeping the drive at a standard size. They should have each platter's head move independently; you could (internal to the drive) treat each platter as a single 'drive' in a raid array.

    • @ArtofServer
      @ArtofServer  26 days ago

      That's interesting. If you happen to find an article with more details about that, please link it here. I'd love to learn more. Thanks for sharing! :-)

    • @ruben_balea
      @ruben_balea 12 days ago

      Those were Conner Peripherals *Chinook* drives, named after the Boeing CH-47 Chinook tandem rotor helicopter, the platters were normal sized but the frame was 5.25"
      In 1996 Seagate acquired Conner with all of its patents.

  • @ThomasWinders
    @ThomasWinders 16 days ago

    2:10 - what you're talking about is basically "raid 0": since the two units can be seen separately by the system, you can configure them as such... this disk overall looks to me like a compromise between disk capacity and speed. If you don't mind the risk of losing all your data in case of failure, of course.

    • @neins
      @neins 10 days ago

      Could it be set up as "raid1"?

    • @ThomasWinders
      @ThomasWinders 10 days ago

      @@neins If the two units are visible separately, it should be possible. You'll gain redundancy, but you'll lose the speed benefit of parallelizing reads and writes.
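
      A one-line sketch with hypothetical LUN device names; note that such a mirror only protects against a failed head or surface on one actuator, since a motor or PCB failure takes out both halves at once:

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb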

  • @ericfranklin6229
    @ericfranklin6229 28 days ago +1

    Mirrored stripes may work well with this setup. But one issue for SOME people is that if their OS has a limit on how many physical disks they can have (Unraid), then this would count as 2 disks towards their license. And I definitely wouldn't use this for parity.

    • @Gastell0
      @Gastell0 28 days ago +1

      Use the SATA version of the drive instead of SAS; it's effectively a transparent raid0.

    • @ArtofServer
      @ArtofServer  27 days ago

      Good point on the drive count license issue! Thanks!

  • @FloridaMan02
    @FloridaMan02 22 days ago

    How is the performance on that newfangled CPU you got there lol

    • @ArtofServer
      @ArtofServer  21 days ago

      It's fantastic! Thanks! :-)

  • @topstarnec
    @topstarnec 28 days ago +1

    The video is dark. Could you adjust it to be brighter for the next video? Thanks.

    • @ArtofServer
      @ArtofServer  28 days ago

      Which part? The overhead camera or the screen recording? Thanks for letting me know.

    • @fujitsubo3323
      @fujitsubo3323 28 days ago

      @@ArtofServer Nope, I have a higher-end content creation monitor set up almost perfectly for everything, and the video was not dark; it looked great.

    • @topstarnec
      @topstarnec 27 days ago

      @@ArtofServer The terminal; the dark theme with green is a bit hard to read for me.

  • @KellicTiger
    @KellicTiger 27 days ago +1

    As others have said, I really don't like this design from a standpoint of fault tolerance. You really need greater than RAID 6 or raidz2, as a single drive could bring your system precariously close to a failed array. Unless you stagger them so each disk is in a different zpool; even then it increases the odds of degrading multiple pools at the same time. I'd rather see this technology integrated into a single drive, increasing the overall throughput through SAS. I mean, you mentioned, or actually demoed, how you can do a RAID 0 through the operating system. And while that's nice, it'd be nicer to see that simply done at the disk level, bypassing the need to mess around at the operating system level. Because I'm thinking this might be a nightmare on something like TrueNAS Scale.

    • @ArtofServer
      @ArtofServer  27 days ago +1

      You have to be very careful how you assign the LUNs to the vdevs in ZFS / TrueNAS scale / etc. If this technology was more widely adopted, you could implement a check to make sure all drives assigned to any particular vdev are not from the same SCSI target and issue a warning when trying to assemble such a pool. And as you said, if one of the 2 LUNs has problems, when you pull the drive to replace, both LUNs will go offline, affecting 2 vdevs even if you carefully assign the LUNs to different vdevs. And yeah, if I were to try this, nothing less than raidz2.
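
      A hedged sketch of such a check, assuming both LUNs of a Mach.2 report the physical drive's serial number: list every disk with its serial and sort, so LUN pairs show up adjacent before you assign vdevs:

        for d in /dev/sd?; do
          echo "$d $(smartctl -i "$d" | awk -F': *' '/Serial/{print $2}')"
        done | sort -k2   # duplicate serials = two LUNs of one physical drive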

  • @tristankordek
    @tristankordek 27 days ago +1

    😊👍 THX

  • @redwolf_dane4672
    @redwolf_dane4672 28 days ago +1

    The Mach.2 drives are great for Unraid systems; they make a world of difference.

    • @ArtofServer
      @ArtofServer  27 days ago

      Are you currently using Mach.2 drives? If so, how many?

  • @gsestream
    @gsestream 19 days ago

    Old multi-head tech; you could move each read head independently.

  • @uhohwhy
    @uhohwhy 15 days ago +1

    Oh well, I thought you would open it...

  • @jayfara2282
    @jayfara2282 26 days ago

    Two drives in one, like a RAID 0? How much faster can one get if you RAID 0 a RAID 0? 😱

  • @bitosdelaplaya
    @bitosdelaplaya 20 days ago

    Totally agree with you. Seagate drive quality for the last 5 years or more has been very poor. I use generic Seagate 2TB drives in a NAS at my company, and their lifetime is a horror.

    • @ArtofServer
      @ArtofServer  20 days ago +1

      Yeah, sorry to hear it! I know the pain because I talk to thousands of people every year about their storage server builds, and so many people run into issues with Seagate more than any other brand.

    • @bitosdelaplaya
      @bitosdelaplaya 20 days ago

      @@ArtofServer And yes, many years ago the Cheetah drives were THE reference.

  • @AbdelhamidMohamed
    @AbdelhamidMohamed 27 days ago

    Make a video on how to turn an HP Z840 into an AI server running Ollama.

    • @ArtofServer
      @ArtofServer  27 days ago

      what does that have to do with this video?

    • @AbdelhamidMohamed
      @AbdelhamidMohamed 27 days ago

      @@ArtofServer we need the same application done in ruclips.net/video/Wjrdr0NU4Sk/видео.htmlsi=A5xnU4gXFjufkMHZ

  • @SirHackaL0t.
    @SirHackaL0t. 28 days ago +1

    Isn’t the idea of this that the data is spread across both areas by the drive itself?

    • @ArtofServer
      @ArtofServer  28 days ago

      That would probably be the next evolution of this technology. But as it is, no. Thanks for watching!

    • @SirHackaL0t.
      @SirHackaL0t. 28 days ago

      @@ArtofServer It was interesting to see it show up as two separate drives.
      Btw, your green font in the terminal is way too dark for YT. :)

  • @sioux22
    @sioux22 15 days ago

    They could've just used two heads on different sides of the drive. A bit more expensive but a huge missed opportunity.

  • @martontichi8611
    @martontichi8611 27 days ago +1

    I love SAS

  • @Arimodu
    @Arimodu 17 days ago +1

    This is super interesting technology... not really interesting for a homelab, but certainly for the enterprise space.
    Also, funny how the world works: you say you don't like Seagate, yet for me, I've had every drive under the sun fail on me except Seagate.
    Specifically IronWolf and Exos; the only failure was a single 2.5-inch Barracuda (and that one was not even mine, I just had to fix it for data recovery, which was about 80% successful).
    On the other hand I have:
    2 failed WD Elements drives, both just outside of warranty - one had complete data loss, fortunately not critical data, which is why there was no backup.
    7 failed WD Reds from my uncle's NAS, all of them after about 2-3 years (my uncle always pairs up WD and Seagate in the same array - yes, not good, I keep telling him - and has never had a failed Seagate yet; those drives are like 10 years old now).
    Then there is my NAS, with an assortment of Seagate drives:
    3 IronWolf drives, 7 years and going,
    9 Exos drives, 4 years and going.
    I am not one for brand loyalty, nor am I saying that your experience is invalid. I just found it interesting how different my experience is. To be fair, I probably have way less experience in total, as I don't have any actual servers; I just build from off-the-shelf hardware for whatever I need at home.

  • @GeoffSeeley
    @GeoffSeeley 28 days ago +1

    Interesting, but I still don't trust Seagate with my data. As you touched on, care would be needed for ZFS use, as the potential for failure is higher if you don't spread the device's two LUNs across different vdevs.

    • @ArtofServer
      @ArtofServer  27 days ago

      I would agree. Although I think Seagate was the first to implement this, I think HGST also has dual actuator products. I just can't get hold of them yet...

  • @yuan.pingchen3056
    @yuan.pingchen3056 26 days ago

    The question is, does it make sense to theoretically have twice the IOPS? If it brings higher holding and maintenance costs, it will be more difficult to compete with ultra-high IOPS NVMe SSDs. I hope for ultra-high capacity and ultra-low power consumption (motor rotation speed). I wish the HDD manufacturers would keep placing Intel 3D XPoint NVRAM on the hard drive's controller board, whether as a buffer, as cache, or for storing metadata (the data used to describe the data on the disc).

  • @MarkDeSouza78
    @MarkDeSouza78 28 days ago +11

    I see this as incredibly dangerous... Imagine you are running your ZFS pool as raidz1 and the controller or motor dies on a Mach.2 unit; this means that 2 "drives" drop out of the zpool and data loss occurs. While it's true you could build your pool around this feature, it's a mistake just waiting to happen.

    • @ArtofServer
      @ArtofServer  28 days ago +15

      I do think it requires careful consideration when used in a parity RAID scheme.

    • @alpine7840
      @alpine7840 28 days ago +2

      @@ArtofServer OMG! Great catch. That is a nightmare that WILL, not might, happen.

    • @Dr_b_
      @Dr_b_ 28 days ago +5

      You TEST failure modes after building your vdevs though, right? Be smart about how it's architected and these drives are fine.

    • @BrunodeSouzaLino
      @BrunodeSouzaLino 28 days ago

      RAID 6.

    • @johnpyp
      @johnpyp 27 days ago +2

      You can just use the drive in RAID 0 via LVM as its own drive in ZFS. Works great; I've been doing it and get great performance out of it.
      The drive will fail as a unit, as a single drive :)
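
      A sketch of that LVM approach with hypothetical LUN names: stripe the two halves into one logical volume and hand ZFS only the LV, so the pool sees one "disk" that fails as one disk:

        pvcreate /dev/sda /dev/sdb
        vgcreate mach2vg /dev/sda /dev/sdb
        lvcreate -i 2 -I 512 -l 100%FREE -n mach2lv mach2vg
        # then give /dev/mach2vg/mach2lv to ZFS as a single device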

  • @SB-qm5wg
    @SB-qm5wg 25 days ago +1

    I like the concept but it would be price mostly that would make me get these. It's still a single point of failure.

    • @ArtofServer
      @ArtofServer  25 days ago +2

      Price per TB? On the used market, it is pretty competitively priced.

  • @rphilipsgeekery4589
    @rphilipsgeekery4589 27 days ago

    You are doubling the failure rate?

  • @ertanerbek
    @ertanerbek 28 days ago +1

    If we want to see real performance from mechanical disks, we must use the two-head-per-disk-group technology patented by Seagate. Fitting two disks into one box just saves space.
    If we can use two or more read-write heads per disk group, then we can start talking about real performance in mechanical disks.
    This is just a little vaccine for the survival of mechanical disks; we need real solutions.

    • @ArtofServer
      @ArtofServer  27 days ago

      Seagate has such a patent? Have they ever prototyped an HDD based on that IP?

    • @ertanerbek
      @ertanerbek 26 days ago

      @@ArtofServer If YouTube deleted my previous message, you can search Google; Tom's Hardware already ran news about it: "seagate dual head hdd patent".

  • @billycroan2336
    @billycroan2336 27 days ago +2

    Sounds more like half than twice. Sorry, Seagate. Come back when you have two independent sets of heads; that will really impress me. How nobody has done that yet is beyond me. Heads on both sides of the drive. You could put two independent controller boards on it too, for use with cluster file systems from two different hosts, or use the multiple paths for more throughput. I bet you could physically put three sets of heads on the same platter, 120 degrees apart, though at that point you have no hope of it resembling the standard form factor. I wonder if that would increase heat a lot.

    • @j_taylor
      @j_taylor 27 days ago

      Your suggestion gave me flashbacks from FCAL. Except FCAL was less complicated. Lol

    • @WolfGamerBohumin
      @WolfGamerBohumin 23 days ago

      Do you mean something like Conner Peripherals "Chinook" HDD?

    • @billycroan2336
      @billycroan2336 23 days ago

      @@WolfGamerBohumin Yeah, exactly that, but with modern RPM and cachin... oh wait, that would be the problem: separate caches would corrupt each other's disk contents. So I guess you couldn't have completely isolated PCBs in any practical way. But MAYBE you could still have two head sets, with independently articulating arms on each set to improve throughput.
      Perhaps even put multiple heads on each arm, so that one can read the inner 55% of the platter and the other the outer 55%, and each arm then only needs to move half as far.
      But I bet that would increase heat a lot, from more friction and more coils moving parts.
      SSD it is, I guess... it still feels a little hard to swallow what SSD costs in the dozens-of-TB range today compared to HDD.

  • @lee98210
    @lee98210 24 days ago

    Double the failure chances from 2 head assemblies too.

    • @ArtofServer
      @ArtofServer  23 days ago +1

      Not really. It's the same number of heads as traditional drives; they just move independently in 2 sets. The issue is that if one fails and you have to replace the drive, both LUNs, including the other one that may not be defective, are removed from the system.

  • @turbodog99
    @turbodog99 16 days ago

    Oh hell no

  • @cameramaker
    @cameramaker 28 days ago +1

    So HDD manufacturers are deliberately crippling their products: for double the speed you do not need a separate actuator, you only need an ASIC which allows connecting more read/write heads! The surface bit rate of this Mach.2 drive is exactly the same as on a conventional drive (or even less, if it needs 2 servo sides compared to one). So why does no vendor have a higher-performing chipset, when all they do is mux all the heads (10+ now) into 2-3 channels for the ASIC?

    • @BrunodeSouzaLino
      @BrunodeSouzaLino 28 days ago +1

      The latest thing is the heat-assisted (Mozaic) drives, which heat the platter to allow for more data density. That is more dangerous because you can heat a magnet to the point that it permanently loses its magnetic properties.

    • @ArtofServer
      @ArtofServer  27 days ago +1

      Has there ever been a PoC of such technology as you described?