Hey Wendell, In 1997, on a Canadian Navy frigate, I serviced an 80MB hard drive that was the size of a small kitchen stove. The PLATTER section was removable so you could calibrate the read/write heads. There were two of them and they ran the ENTIRE command and control: radar, weapons, etc. Yes, 80 megabytes was handling the operating system that controlled missiles and guns. Heh. Shortly after that they got replaced with an off-the-shelf RAID solution, AFAIK. Oh, and we had a communications device the size of a fridge that had a readout that used HEATED elements because it preceded LEDs. In 1997! And... if you really want a laugh: I was standing there watching the Signals Operators do a blackout drill and noticed they were shouting out commands like "restart crypto", so afterwards I asked what they were doing. It turns out the BATTERY had died, so every time they did these power blackout drills they had to restart the cryptography device. I said "would you like me to put batteries in it so you don't need to do that?" They said "It has batteries?" The problem had been happening for so long they'd put the normally unnecessary procedure into the MANUALS. (And all the clocks were wrong, and had been for a couple of years. I went over to the next ship over, borrowed the manual that we no longer had, then fixed it.)
~300MB/s up to ~500MB/s is a great jump, but what interests me more is the possibility of lower -seek times- latency (thanks for pointing that out: seek times on both sub-drives will be the same, but it's like splitting the queue in half)
@@jackwhite3820 If I understood everything correctly, it will be the same* (*the same as a similarly configured RAID, which should be faster than one drive).
Seek times are relatively the same on each disk subsystem, however the net effect is twice as many IOPS/TB in the same disk slot vs a conventional drive.
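A back-of-envelope sketch of that IOPS claim. All numbers here are assumptions for illustration, not vendor specs: each actuator has the same per-request service time as a conventional drive, but the two serve requests independently, so the drive slot as a whole completes twice as many of them.

```python
# Toy model of random-read IOPS for a 7200 rpm drive (assumed figures).
avg_seek_ms = 4.16                            # assumed average seek time
rotational_latency_ms = 60_000 / 7200 / 2     # half a revolution at 7200 rpm
service_ms = avg_seek_ms + rotational_latency_ms

single_iops = 1000 / service_ms
# Each actuator has the same seek/latency profile, but the two work
# independently on their own halves, so the slot doubles its request rate.
dual_iops = 2 * single_iops

print(round(single_iops), round(dual_iops))
```

Same ~120 IOPS per actuator, but ~240 per slot, which is where the "twice the IOPS/TB in the same slot" framing comes from.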
It's funny how people keep saying that hard drives are dead, but they keep finding ways to keep them relevant, even outside of capacity. I'd love this for the working data drive on my server. I'll be really interested to see the random read/write performance as well.
@@Kanakarhu Even in server related groups, I've been laughed at for saying tape is still quite relevant and is still widely used in the enterprise space as backup. Those people are simply ignorant. They're enthusiasts that think they're super hard core, but don't really know the market when it comes down to it.
@@iaial0 They're incredibly cheap for their capacity. You can get a 20TB HDD for less than the cheapest 8TB SSD. And yeah, they're great for cold storage and backups, but HDDs are still more versatile than that
@@TheGameBench SSD's are great for cold storage. And especially for catastrophic loss backups, because you can get back up and running much quicker. It can mean the difference between being out of business for hours or days, which more than makes up for the extra cost of SSD's. Things like tape also need specialized storage facilities. For SSD storage all you'd really need is to power them up once every year to ensure data retention, and if you really want to ensure data retention, to have a mirror backup that copies over between two drives to ensure the write charge is fresh. And this all can be automated inside of a suitcase sized container that can be stored almost anywhere. Companies only stick with tape backup because they're already invested in it, and because of business insurance purposes. Basically, big tape has the system locked down, ensuring that what insurance covers will be at jeopardy if businesses use anything else.
SIR!!! I have always marvelled at the achievements of electro-mechanical engineers in the field of hard drive technology. To me, these achievements rank among mankind's greatest so far, and these chaps don't get much applause. Just when you think the hard drive is a thing of the past, it rises like a phoenix once again. "The Hard Drive is Dead! Long Live the Hard Drive!"
And the HDD companies hit back on speed against SATA SSDs. This is really cool tech and I'm excited to see if these will end up becoming affordable for consumers eventually.
Looking at the prices, it's ~13% (~$30) more compared to the normal Exos. It also consumes ~2 watts more. If you need performance and space is an issue, it's good. If space is an issue but performance isn't, then the normal Exos is fine. If performance is an issue but space isn't, then a few 10 or 8TB drives instead.
@@GraveUypo this is fair but these will be very nice for large storage systems where the really critical operations aren't random, re-silvering and the like. Both the storage density and the speeds are extremely welcome and I'm hoping these drop a little in price by the time I'm ready to re-disk my JBOD
SSDs are so cheap and have so many upsides that these are basically a novelty. The main benefit of SSDs, other than speed, is their longevity. Even when they can't write anymore, you at least can still read the data on them. That's a big deal for data storage. Now of course one could argue that you can get more capacity for cheaper than SSDs, but if that capacity doesn't come with speed and low latency, then even in the business space it's pretty useless these days. This is a product that would have been amazing 20 years ago, but is just meh today. It's just too little too late. And the latest NVMe drives are ultra compact: they're putting 4TB of storage at 7GB/s and 1 million IOPS in the size of a small thumb drive, at pretty affordable prices.
It's great that SuperMicro has these for brand new with manufacturer warranty and everything for a reasonable price. Server Part Deals has had refurbished 2x14 SAS and 2x18 SATA models in stock off and on for $150 and $220 ea. respectively. It's probably worth buying these new for data you care about... or use the savings to buy a few spares! Two or three of these can keep a 10GbE link saturated so they are perfect for a home NAS without a ton of drive bays. Hopefully this will be the last time I buy spinning rust for my personal data hoard. Thanks for the video Wendell!
I've been waiting on these for years and now they are available and I don't need them. Looking forward to seeing what the difference between the SAS and SATA version is.
@@mrmotofy I have spares of the same model in production, and even a bunch of used Coolspin drives that I intended to use for a backup server if I absolutely had to.
Damn... And here I am, having just installed 2x 8TB QVOs, started xfering over ~3.5TB of video files, and quickly watched xfer speeds plummet to ~150MB/s once the cache ran out.
I had 4 of those in RAID 0 (before SSD prices plummeted), which brought the speeds up to meaningful levels, also because you could write 4 times the amount of data at once until the cache ran out. But without striping several of them, it's indeed very 2010-feeling.
Heh, there was a time, about 25 years ago, when 15 MB/s would have been considered crazy fast and you needed a RAID array to achieve anywhere close to 50 MB/s and here we are looking down at many times more than that speed. How quickly technology marches on and how rapidly we humans adapt and get used to nicer things..
You had me at "eleventy" I thought I was the only one who said that. Love these drives and I can't wait to get all those "gigglebytes" of read/write speed.
I've got a couple of questions. Are these dual heads quieter than single-head drives during read/write operations, as there's less mass moving and slowing during a seek? Or louder, due to 2 separate seeks able to be performed at the same time? What's the power requirement difference between single and dual? In the enterprise world, we don't really care. But with SATA versions due, I can see that the "average" SOHO user and homelabber who may take note might care a lot more, as kWh = $$$$ these days. Overall, really cool. However, in the coming years they might have a tough market to compete in should flash prices keep dropping at their current rate.
I have a pair of the 2x18 drives in my workstation, and seeks are definitely quieter than the Exos X18 drives they replaced. Like other helium drives, there is little to no rotational noise.
I would expect slightly higher average power consumption due to controller complexity and the need to move and rotate the platters and heads at the same speeds while using smaller actuators, but almost certainly lower power consumption than two separate drives.
Normal Seagate are fine and have been for years. It's their cheap crap you have to look out for. Usually you only find those in OEM stuff though. WD didn't earn this reputation as much because they didn't really do cheap OEM crap (or if they did they refused to even be associated with it.)
@@CreativityNull Fact check: WD is a massive OEM supplier for Dell, Lenovo, HP, and others. They simply manufacture their bottom tier to a quality above Seagate's mid-tier so they don't get the bad reputation.
@@tim3172 And more expensive, in the more expensive models (at least for the spinning drives, when they used them a while ago in these devices out of the factory). As far as I know, though, that's mostly for SSDs currently, not for spinning drives except rarely in the past, which is where Seagate got its bad reputation. Seagate got that reputation over a decade ago and it's not even relevant anymore, since the low-end stuff isn't using Seagate spinning rust and is instead using eMMC.
It amazes me that the air current (well, helium to be precise) from one head moving doesn't disturb the positioning of the other head. Or, how do they know that certain seek patterns can't create an oscillating gas current that could amplify and cause errors, or even a head crash?
The two actuators sit on the same actuator axis, so the upper actuator has the heads for platters 1-5 and the lower actuator for 6-10, say. So interference should be very low to begin with. This also means that these might be scaled to three or four actuators, because the footprint doesn't really increase.
This is an improvement in storage density. Nothing else. All other advantages are already available thru software trickery and multiple separate HDDs working in tandem.
Oh I thought someone had finally made the idea I had probably 20 years ago of multiple heads per platter. This doesn't seem much cleverer than jamming two drives into one box, especially with added complexity of having to figure out which platters are on which drives. An interesting idea for sure, but feels like it needs more development time.
The 9MB Seagate is awesome! Those old tech times were honestly way cooler than nowadays. Note: I bought my first PC with a 40MB Seagate drive in an 80286 12MHz PC. But I liked those times A LOT!😊
Not in the 3.5" form factor, there's not enough Z height for all of the magnets and actuators. If they brought back the 5.25" drives, they could scale the capacity way up, but that'd present a different set of challenges.
Also look at the arc of the head movement. You have a density limit there as well, and they cannot impact the laminar airflow that the heads ride upon. I'd imagine more heads would already make that smooth air buffer choppier. I wonder if anyone has revisited bubble memory. It actually shipped in the early 1980s: terribly slow access times, but the density was stunning. I think the great-granddaddy of the modern clamshell laptop, the GRiD Systems computer, shipped with bubble memory.
If you want 4~6 actuators, go to the market today, buy 4~6 HDDs, slap them together in a raid config, and you have your "4~6 actuator" HDD ready in your hands today itself 😂
@@GGigabiteM 4 would be possible if they could get mirrored sets of heads: have 2 stacks of 2 actuators, one stack doing the tops of the platters and one stack doing the bottoms. You would likely lose a platter due to needing enough height for the arms to overlap. With a helium drive it shouldn't be too terrible of a loss.
Very cool. I still remember the excitement when I got to play with my first RAID card and 2TB SAS drives for the first time. As good as SSDs are, there will always be a place for mechanical drives.
Always wondered why we never added more optical sensors to disc drives to read more per revolution (or why we never continued to improve that working prototype 500GB Blu-ray from the 2000s). Glad to see improvements yet again, but it does feel like we're abandoning formats that still have potential.
@@shanent5793 Huh, TIL. That idea for splitting the beams is pretty neat, and it might do wonders with modern advancements and components. I would also think there'd be issues with read speeds on normal discs, as that kind of laser setup would need discs to burn data/programs in a specific sector layout designed for said lasers to read more data. Also, the only Kenwood I ever saw was my dad's record player (Kenwood KD-2055), so that was an unexpected brand name to see for a computer part haha.
'91 or '92 I bought two HP 2GB drives the same size. They took forever to spin up and would vibrate the table. There was a chime when they were done spinning up and ready; it sounded like an airplane fasten-seatbelt chime. I used an old sturdy XT case and ran them externally, using IDE cables out the back of the main system.
I'm surprised Wendell and Linus haven't done this already. I know Linus has needed to call Wendell on some of his networking and servers; it's time to help the legend with his own petabytes project.
@@MinorLG I'll be posting on the forums. Finally got things coming together, and 8x18 isn't enough. Plus I made the mistake of making a JBOD instead of a RAID 5 with 2 hot spares. I have over 45TB of data, so this server will back up the main one, then I'll wipe it and recreate it as RAID 5 with 2 drives for fault tolerance.
As someone who worked on an exabyte-scale Hadoop cluster: it literally takes thousands of machines in a data center. An exabyte is 1 million TB, or about 55 thousand of those new 18TB drives 😲
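The arithmetic behind that drive count, sketched out (decimal units assumed, i.e. 1 EB = 10^6 TB):

```python
import math

TB_PER_EB = 1_000_000   # 1 exabyte = 1,000,000 terabytes (decimal units)
DRIVE_TB = 18           # capacity of one of the new drives

drives_for_an_exabyte = math.ceil(TB_PER_EB / DRIVE_TB)
print(drives_for_an_exabyte)  # 55556, i.e. roughly "55 thousand" drives
```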
Looking forward to the video on the SATA versions. Fingers crossed that when they come they're decently priced down in Australia, because they could be great for some things I'm planning soon.
Oh, I was thinking they'd be on opposite corners from each other, so I was wondering how they'd done it with the rust still at one end of the chassis. Only at the end, when explicitly shown, did it click that they're overlapping each other. Did not expect that.
Other than the number of vdevs involved, are there any differences between doing the two raidzs vs first raid0-ing the drives and putting them into one raidz? One benefit of doing it with the two raidzs is that in the event of half a failure, you would not risk the second vdev at the same time, but that assumes extra slots. I'd be interested in seeing any performance differences; you'd avoid doing two expensive striping calculations, but you'd have to do a whole lot more "splitting".
HP had dual-actuator drives about 20 years ago, but they didn't catch on. Maybe they were too complex to control/integrate into RAID systems at the time? Too expensive? Too locked into a proprietary ecosystem? I dunno, but I've often wondered if the idea would crop up again. The thing that occurred to me at the time was, "that would really speed up file-copying on the same disk: not having to seek back and forth between the source file and the destination". At the time I was thinking about tasks like defragging, where lots of data has to be moved from one part of the disk to somewhere more appropriate. =:o}
Interesting to see what is on their website: "MACH.2 is the world's first multi-actuator hard drive technology, containing two independent actuators that transfer data concurrently" Based on your comment I wonder if they are wrong with that statement...
OK, found something: Conner Peripherals (which became part of Seagate) announced dual-actuator drives in 1991. The Chinook had 2 sets of heads on each platter. This one has one set of heads per platter, and is essentially stacking 2 HDDs in one case.
@@autohmae Interesting... I've just tried some Googling, and can't find any record of the earlier HP dual-actuator drives. Did they never actually make it to market? =:oo I learned about HP's design from a big show-off display that was in the lobby of one of their sites, where a friend of mine was working at the time. The images showed an elongated drive housing, with an actuator assembly at each end, accessing the disk from opposite sides, and the text talked about how innovative this was (of course!) and how much it could speed things up. Was I taken in by some CGI concept-art for an un-realised product, I wonder? (If anyone could do photo-realistic graphics at the time, HP certainly could! =:o} ) Looking at the images Seagate are sharing for how the Mach.2 works, they've got the two actuators stacked one on top of the other, with each one only able to address half the platters - hence the drive showing up as two separate devices. The HP design could access all platters equally, which would surely be preferable? But of course you then have the extra length of the housing to fit the extra actuator, which maybe made the product just too big to fit in existing machines. Certainly a 3.5inch drive would end up being as long as some of the early 5.25" optical drives, and have to be put in a 5.25" bay if there wasn't plenty of clearance at the back of the 3.5" bays... And that's just thinking about cheap consumer cases. Servers, with their rows of snugly-fitting exchangeable drive sleds, would be a whole different ball game (i.e. a whole new design required). But back then, SSDs were just a glint in their daddy's eye, so *maybe* it would have been worth it for some customers to invest in a whole new HP server, built to take the longer drives (and with the necessary "extra clever" controllers)...? (I think the pictures I saw were of a 5.25" design, BTW, but I could easily be misremembering.) I think we need an HP employee in this thread, stat! =:o}
@@autohmae [BRAIN SPARK] the name Chinook rings a bell... [SCRATCHES HEAD] ... But why would they have had details of a Conner product proudly displayed at an HP site? =:oo Now I'm totally baffled!
So these show up as two independent 9TB drives each? Does definitely take a bit of thinking to optimize the performance and reliability, but really cool technology!
Very nice! And yes, it makes perfect sense to do it this way. I am curious how this will work with SATA. As far as I know, you can't have multiple devices on a single SATA link, like you can with SAS and an expander chip. Will the drive have two SATA connectors perhaps?
I wonder what this does for reliability. How often would you lose one head and not the other? In consumer land, 18TB is too much to lose to one disk, but 18TB is also too much to back up.
Hey Wendell, exciting stuff. Could you let us know about the power consumption? Also, I'm curious whether mixing the 16TB drives with paired 8TB SSDs would improve performance at all.
I was hoping it was going to be 2 full sets of read/write heads, so that you could perform 2 parallel IO operations on the same disk and double the random IO performance. I was really curious how they fit a whole second arm mechanism in there, but no, they punted on it and just split the one actuator in half :( What I don't understand is how this is a speedup; it's the same number of read/write heads traveling over the same surface area at the same speed. Like, the same number of bits per second are passing underneath each head regardless of whether they're moving in sync or not.
Very interested in this. I saw back in Feb press release for ultrastar dc hs760, but haven't seen much news on them since. Wonder how these would work in Unraid.
I'm glad I caught this video on mechanical HDDs, and not SSDs or M.2; they simply don't have the capacity. I really appreciate your presentations. I clearly do not understand HOW these HDDs actually function, but I have lost enough data over the years to bad PCBs. I have a valid question (I think): perhaps you could explain why the manufacturers can't seem to make any sort of uniform replaceable PCB for their HDDs? How many petabytes of data are lost every year because "donor" drives can't be found, or because the technical skills of a solderer are lost on a microscopic resistor or capacitor? I'm sure you could supply a short informational vid?
I sometimes think Wendell is the only guy worth talking to if one needs to discuss the technical side of IT. These days it's way too much marketing blah-blah... Everyone is trying to hide the obvious red flags and design fails... They will happily sell you a consultant: a guy half your age with less experience who will point out that the problem you describe is rather normal behavior and can only be solved by throwing more money at it...
I managed to get my hands on a SATA mach.2 drive. It doesn't show as 2 drives, only as a single 18TB drive. Did you make that video about the SATA version?
Would raid0-ing the 2 halves of the drive to make them behave like 1 fast drive, then combining them into raidz2, be better than 2 vdevs of raidz1-ed half-drives? Or how would it be different?
I've been curious about the price of these since they were announced a long while back, but I can see now that they're quite pricey (as would be expected). It's cool technology, but it still can't replace SSDs in terms of IOPS and random performance.
I wish they'd put 4 actuators on the same platter; it would give insane amounts of read/write speed and insane seek times, with the heads having to accelerate and decelerate less.
Not only Europe uses the metric system (part of the SI system); the whole American continent (except the USA) has used it for a long time too. In fact, to be fair, almost the entire globe has!
Why do hard drive manufacturers not just make the write head have strips of side-by-side read/write heads that could read and write parallel on-disk tracks? You could build them with extra heads to account for the fact that a head slightly further in towards the center, or out towards the edge, sees a different track: either by being slightly bigger, or by having more read/write heads. 5 parallel needle-ends = ~5x faster. As density increases, you could end up multiplying your read/write performance, since you'd be able to add an extra read/write head whenever the density really increased.
If I've understood your suggestion correctly, the reason they don't do that is the maths/geometry of circles and lines. The tracks/lines on the disk platter form an imaginary circle, whose center is the center of the platter and whose perimeter is the track itself. The further away from the center of the platter a track is, the greater the area of that imaginary circle and the larger its perimeter will be. The read/write head is essentially a single point. It can be moved anywhere along the platter in any fashion and read from any given track. When you have a stationary single point intercepting a rotating circle, that point will always stay the same distance away from the center of the circle, and for a hard drive that means it will always stay on a track. Introducing more read/write points changes the geometry of the read/write mechanism from a simple single point to some other shape, either a straight line or an arc of some kind. If we use a straight line of 5 points, for example, then only the original point can be guaranteed to be on the right track. The four extra points will be further out from the center of the platter, and so they could cross over into the tracks that are further away from the center too. If we used 5 points placed in an arc formation to mimic the circular platter, they would only be able to reliably read from tracks forming a circle whose perimeter shared the same arc as the read/write layout.
The only way to make that idea work is if you found a way to actuate the read/write points themselves, instead of just actuating the entire head. There's an even bigger problem though. Let's say you've magically created a read/write assembly that is able to move the read/write points on the head dynamically as the head moves across the platter, and the result of that breakthrough was that you could read 5 times as much data sequentially through the head... The only way that breakthrough would increase read speed would be if you also increased the rotational speed of the platter to 5 times its original speed. That would mean having hard drives in your computer or server rack with an RPM of 50,000 or more! That would make a lot of noise, consume a lot more energy, cause heat problems, wear out the platters faster, and possibly damage the PC or server, or cause it to fall over. And the hard drives would need to be able to keep up with that speed without any errors, which is hard to make happen. You would have all those problems and still be nowhere near as fast as a modern SSD, which can be bought pretty cheap these days. Basically, the engineering behind HDDs is already so fine-tuned at this point, more progress on HDD technology is becoming exponentially more expensive, the rewards are shrinking, and the SSD has already achieved much better results.
@@firstnamelastname-oy7es I don't think this would be an issue. Think about this: the track density has obviously increased by many orders of magnitude over time. Originally, drives like the IBM 350 had a density of 2000 bits per square inch; drives nowadays have densities of 1 terabit per square inch or greater. Thus we have MANY more tracks on a modern drive than on a drive back in the day. It would simply be a matter of making a multipoint read head in a strip, with the furthest tip of the strip having multiple points (or, instead of a strip, you'd have say 3-5 read heads or more, all in a line), and you could write with multiple heads at a time. Essentially, you could format tracks differently, whereby tracks were thicker than they used to be, but files and data would get broken up into 3-5 pieces or more. Or simply have 4 or more arms with read-head strips writing to the drive at the same time (say at the cardinal points of the drive). Perhaps you could even have the read arm be a straight bar of metal passing over the entire drive, fixed in place to the chassis of the drive, with multiple actuators along the arm.
I think if they REALLY tried, we could get an independent actuator per platter per actuator stack. 12*2 for IOPS potential: 24x random IOPS. Still nowhere near SSDs, but WAY better.
Is it just me, or does it seem like there have been so many advances in HDDs lately? I had thought with time we would all just go SSD-based for everything as a "simpler" solution, but I find myself still buying HDDs all the time, because for large media storage, or just storage that is not speed critical, they are king.
I don’t have hands-on experience with ZFS, so I don’t know how wild/bad my idea sounds, but here it is: can we, say, create on each drive its own “raid0” aggregated meta-LUN from the “halves” (so we can treat it as one fast disk) and then create raidz1/2/3 from these fast meta-LUNs? From my perspective it would simplify handling of the raid in case of drive errors. Edit: now I’ve reread it and realized what I’m proposing is basically raid 0x ¯\_(ツ)_/¯
No, redundancy in ZFS is handled at the vdev level, i.e. at the lowest rung of the disk-pooling shenanigans. The pool just joins the vdevs; it does not add redundancy. To make an example: you make a pool with two mirror vdevs, and that's kind of a RAID 10, but not really. You lose one drive, you are fine, because you have another drive in the same vdev. You make a pool with two spanned vdevs, you lose a single drive, and now one of the vdevs is gone and the whole pool is gone with it.
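To make the vdev point concrete, here is a sketch (device names are assumptions; suppose each dual-actuator drive presents as two block devices, so sda/sdb are the two halves of physical drive 1, sdc/sdd of drive 2, and so on):

```shell
# Two raidz1 vdevs, each built from ONE half of every physical drive.
# A whole-drive failure then costs each vdev only one member, which
# raidz1 survives; putting both halves of the same drive into one vdev
# would turn a single mechanical failure into a double failure there.
zpool create tank \
    raidz1 /dev/sda /dev/sdc /dev/sde /dev/sdg \
    raidz1 /dev/sdb /dev/sdd /dev/sdf /dev/sdh

zpool status tank   # shows the two raidz1 vdevs joined into one pool
```

This is an admin-command sketch, not something to paste blindly: check with `lsscsi` or the drive's LUN layout which devices really are halves of the same spindle before laying out the vdevs.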
I think RAID60 might be the best way to use these, this covers the 2 statistical HDA failures among the entire array. You get the performance of striping and still only 2 HDA overhead.
I've been wondering for some time why mechanical drives are still using moving heads. Is it not possible to design a read/write head that is a bar that spans one side of the platter and doesn't have to move? This seems like a logical evolution, assuming that it's not physically impossible. It might take some engineering R&D but reducing the moving parts on such a sensitive and critical device would be of paramount importance.
Physics is against you there. The minimum size of head needed to create a sufficiently focused magnetic field is several times the width of the track you're trying to write, so to give every track its own fixed-position head, you end up having to fill the entire drive housing with heads and *still* can't get as high an areal density as an actuator-type drive.
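A toy comparison of that density argument, with made-up but plausible numbers (both nm figures are assumptions for illustration, not measured values):

```python
# A moving head addresses every track, so areal density is set purely by
# how tightly the tracks can be packed.
track_pitch_nm = 50        # assumed spacing between adjacent tracks

# A fixed head can't sit closer to its neighbour than the width of its
# own write field, assumed here to be ~5x the track pitch.
fixed_head_pitch_nm = 250

# With one fixed head per track position, tracks can be no denser than
# the heads themselves, so the fixed-head design gives up this factor:
density_penalty = fixed_head_pitch_nm / track_pitch_nm
print(density_penalty)  # 5.0
```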
@@therealpbristow I was thinking (in a conventional sense) that if there were discrete heads in the sensor bar then obviously they would have to divide the total number of platter tracks between them. In that scenario perhaps there would be a way to differentiate/focus a head onto individual tracks. This is where some R&D and innovation would come in. However, perhaps there is a way to make a sensor bar that doesn't need discrete heads but could sense individual tracks and modify them anywhere along the bar. This is purely imaginative speculation as it would require a new type of magnetic sensing technology to be developed.
That "30 year old" Seagate holds 9 GB, are you sure about that? I have a similar model (size wise) that just holds 40 MB, and it was a top of the line HDD back in the day.
Would be nice if SATA SSDs came in the 3.5" form factor. I don't think I'll ever be able to afford to make the 288 TB in drives I have now solid state, ever.
@@timramich The reason they don't is that market share wouldn't be high enough. 2.5" can be used in laptops and desktops (with an adapter plate). Heck, we don't even need the full 4" of length on 2.5" SSDs; if you open them up, the PCB inside is only 1" long at times.
This was my question as well. I saw an Intel AIC 4TB (or 3.84TB) P4xxx card the other day that showed up internally as 2x 2TB drives: basically an 8x PCIe card that split internally into 2 U.2 devices. Figured these would look like that too.
Seagate missed a trick here; this is essentially two drives in one package. Cool? Yes, very. However, you could just spend extra and get two drives for better redundancy, as explained in the video. But what would have been REALLY cool would have been two actuators that read the same platters. That would mean you would still have twice the read speed and half the seek time (probably more, since the platters would have to be smaller), and also twice the redundancy in some instances, as you could just park a malfunctioning head and use the remaining one. I'm not an engineer, but I don't see why that type of drive isn't already widely used. I know they existed in the 90s but never really went anywhere, and that's about it, though it would be cool to see if it's possible, or why not if it isn't, for whatever reason.
Two actuators on the same platter don't fit in a 3.5". You would have to use one arm on one side of the disk and one arm on the other to avoid collisions. There is no space for that unless you REALLY reduce the platter sizes, and then wtf are you doing at that point.
@@marcogenovesi8570 I did mention the platter size would have to be reduced a bit. Though, I don't see why that would be too much of an issue since most sas drives have smaller platters for better seek times. Or at least they used to, not sure if they are still like that.
Id be curious to see how these drives perform in a Ceph array. Even if one half of the drive dies, it could help a datacenter limp along or have higher levels of redundancy in the same hardware footprint.
Are these even available to regular consumers? I've been keeping an eye out for some, since I can get a full Epyc system for just under $2000; figure it's time to build a good server now and invest in proper, great HDDs.
For something on the drive itself, single actuator with dual surface striping ought to be feasible, buffer 2 tracks in a cylinder and read / write them simultaneously, much more feasible for drive logic to handle.
@@marcogenovesi8570 Thought I'd recognized NetApp. In the Netherlands I've never seen a datacenter with 45Drives equipment, so I thought maybe they'd stolen intellectual property.
Hey Wendel,
In 1997, on a Canadian Navy frigate, I serviced an 80MB hard drive that was the size of a small kitchen stove. The PLATTER section was removable so you could calibrate the read/write heads. There were two of them and they ran the ENTIRE command and control. Radar, weapons etc. Yes, 80 MegaBytes was handling the operating system that controlled missiles and guns. Heh. Shortly after that they got replaced with off-the-shelf RAID solution AFAIK. Oh, and we had a communications device the size of a fridge that had a readout that used HEATED elements because it preceded LED's. In 1997! And... if you really want a laugh I was standing there watching the Signals Operators do a black-out drill and noticed they were shouting out commands like "restart crypto" so after that I asked what they were doing. It turns out the BATTERY had died so every time they did these power blackout drills that had to restart the cryptography device. I said "would you like me to put batteries in it so you don't need to do that?" They said "It has batteries?" The problem had been happening for so long they'd put the normally unnecessary procedure into MANUALS. (and all the clocks were wrong and had been for a couple years. I went over to the next ship over, borrowed the manual that we no longer had then fixed it.)
~300MB/s up to ~500MB/s is a great jump, but what interests me more is the possibility of lower -seek times- latency (thanks for pointing that out; seek times on both sub-drives will be the same, but it's like splitting the queue in half)
lower latency for the WIN!
I'm pretty sure seek time will be just the same.
@@jackwhite3820 If I understood everything correctly, it will be the same*
*: the same as a similarly configured RAID. Which should be faster than one drive.
@@jackwhite3820 **Possibility**: if you use a file-level RAID instead of a block-level RAID, it could be thought of as cutting the queue in half
Seek times are roughly the same on each disk subsystem; however, the net effect is twice as many IOPS per TB in the same drive slot vs a conventional drive.
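A toy back-of-the-envelope sketch of why splitting the queue doubles effective IOPS even though per-seek latency is unchanged. The service time and queue depth below are illustrative assumptions, not measured Exos 2X figures:

```python
# Toy model of queue splitting on a dual-actuator drive.
# Each random I/O costs a fixed service time (seek + rotation +
# transfer) -- a deliberate simplification.
SERVICE_MS = 8.0    # assumed cost of one random I/O, in milliseconds
QUEUE_DEPTH = 32    # outstanding requests

# A single actuator drains the whole queue serially.
single_drain_ms = QUEUE_DEPTH * SERVICE_MS

# Dual actuators each drain half the queue in parallel
# (assuming requests split evenly across the two halves).
dual_drain_ms = (QUEUE_DEPTH / 2) * SERVICE_MS

print(f"single actuator: {single_drain_ms:.0f} ms to drain the queue")
print(f"dual actuator:   {dual_drain_ms:.0f} ms to drain the queue")
print(f"effective IOPS gain: {single_drain_ms / dual_drain_ms:.1f}x")
```

Each individual request still waits the same ~8 ms once it reaches the head of its queue; it just waits behind half as many neighbors.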
Bonus points for the Floppotron reference. :)
Wendel is such a good thing to have on youtube, knowledge, professionalism, presentation, all straight A's
It's funny how people keep saying that hard drives are dead, but they keep finding ways to keep them relevant, even outside of capacity. I'd love this for the working data drive on my server. I'll be really interested to see the random read/write performance as well.
It's funny how people keep saying that tape is dead; IBM storage still makes quite good money out of it every year...
@@Kanakarhu Even in server related groups, I've been laughed at for saying tape is still quite relevant and is still widely used in the enterprise space as backup. Those people are simply ignorant. They're enthusiasts that think they're super hard core, but don't really know the market when it comes down to it.
@@iaial0 They're incredibly cheap for their capacity. You can get a 20TB HDD for less than the cheapest 8TB SSD. And yeah, they're great for cold storage and backups, but HDDs are still more versatile than that
@@TheGameBench SSD's are great for cold storage. And especially for catastrophic loss backups, because you can get back up and running much quicker. It can mean the difference between being out of business for hours or days, which more than makes up for the extra cost of SSD's. Things like tape also need specialized storage facilities. For SSD storage all you'd really need is to power them up once every year to ensure data retention, and if you really want to ensure data retention, to have a mirror backup that copies over between two drives to ensure the write charge is fresh. And this all can be automated inside of a suitcase sized container that can be stored almost anywhere. Companies only stick with tape backup because they're already invested in it, and because of business insurance purposes. Basically, big tape has the system locked down, ensuring that what insurance covers will be at jeopardy if businesses use anything else.
I ran across a guy saying NO DC or Business is using HDD anymore LOL
SIR!!! I have always marvelled at the achievement of electro-mechanical engineers in the field of hard drive technology. To me, these achievements rank among mankind's greatest so far. And these chaps don't get much applause.
Just when you think the hard drive is a thing of the past, it rises like a phoenix once again.
"The Hard Drive is Dead! Long Live the Hard Drive!"
until we get mainstream multilayer crystal or DNA storage HDDs should continue to have a place.
Oh yeah, EXOS are amazing drives. Discovered them by accident. Have 2, love them.
These aren't the "standard" EXOS drives though. This is a special line of them within the EXOS drive series.
And the HDD companies hit back on speed against SATA SSDs. This is really cool tech, and I'm excited to see if these will end up becoming affordable for consumers eventually.
Looking at the prices, it's ~13%, ~$30 more compared to the normal Exos. It also consumes ~2 watts more. If you need performance and space is at a premium, it's good.
If space is at a premium but performance isn't, the normal Exos is fine.
If performance matters but space doesn't, then a few 10TB or 8TB drives instead.
Still much, much worse random access speeds, which is the main reason SSDs feel fast.
@@GraveUypo This is fair, but these will be very nice for large storage systems where the really critical operations aren't random: re-silvering and the like.
Both the storage density and the speeds are extremely welcome and I'm hoping these drop a little in price by the time I'm ready to re-disk my JBOD
SSD's are soo cheap and have soo many upsides, that these are basically a novelty. The main benefit of SSD's, other than speed, is their longevity. Even when they can't write anymore, you at least can still read the data on it. That's a big deal for data storage. Now of course one could argue that you can get more capacity for cheaper than SSD's, but if that capacity doesn't come with speed and low latency, then even in the business space it's pretty useless these days. This is a product that would have been amazing 20 years ago, but is just meh today. It's just too little too late. And the latest NVME's are ultra compact, these things are putting 4TB of storage at 7GB/s and 1 million IOPS in the size of a small thumb drive, at pretty affordable prices.
@@peoplez129HDDs make sense above 4TB
Your enthusiasm and tech geekiness are second to none. Well done Wendell!
Doing this with drives is such a good idea, double the speed and the only downside is a little bit extra to config, brilliant
It's about time the humble HDD got some long overdue updates.
It's great that SuperMicro has these for brand new with manufacturer warranty and everything for a reasonable price. Server Part Deals has had refurbished 2x14 SAS and 2x18 SATA models in stock off and on for $150 and $220 ea. respectively. It's probably worth buying these new for data you care about... or use the savings to buy a few spares!
Two or three of these can keep a 10GbE link saturated so they are perfect for a home NAS without a ton of drive bays. Hopefully this will be the last time I buy spinning rust for my personal data hoard. Thanks for the video Wendell!
I've been waiting on these for years and now they are available and I don't need them. Looking forward to seeing what the difference between the SAS and SATA version is.
It would be a shame if your drives started failing at an alarming rate LOL
@@mrmotofy I have spares of the same model in production, and even a bunch of used Coolspin drives that I intended to use for a backup server if I absolutely had to.
Damn... And here I am, having just installed 2x 8TB QVOs, starting to transfer over ~3.5TB of video files, and quickly watching transfer speeds plummet to ~150MB/s once the cache ran out.
I had 4 of those in RAID 0 (before SSD prices plummeted), which brought the speeds up to meaningful levels, also because you could write 4 times the amount of data at once before the cache ran out. But without striping several of them, it's indeed very 2010-feeling.
Heh, there was a time, about 25 years ago, when 15 MB/s would have been considered crazy fast and you needed a RAID array to achieve anywhere close to 50 MB/s and here we are looking down at many times more than that speed.
How quickly technology marches on and how rapidly we humans adapt and get used to nicer things..
You had me at "eleventy" I thought I was the only one who said that. Love these drives and I can't wait to get all those "gigglebytes" of read/write speed.
FINALLY someone talking about exos drives
Now hoping to see 20-drive RAID 10 CrystalDiskMark results
I had that WD Raptor X drive and sold it to a collector some years ago... I'm regretting now. It was awesome
I've got a couple of questions -
Are these dual-actuator drives quieter than single-actuator drives during read/write operations, as there's less mass moving and slowing during a seek? Or louder, due to 2 separate seeks being able to be performed at the same time?
What's the power requirement difference between single and dual? In the enterprise world, we don't really care. But with SATA versions due, I can see that the "average" SOHO user and homelabber who may take note might care a lot more, as kWh = $$$$ these days.
Overall, really cool. However, in the coming years they might have a tough market to compete in should flash prices keep dropping at their current rate.
I have a pair of the 2x18 drives in my workstation, and seeks are definitely quieter than the Exos X18 drives they replaced. Like other helium drives, there is little to no rotational noise.
I would expect slightly higher average power consumption due to controller complexity and the need to move and rotate the platters and heads at the same speeds while using smaller actuators, but almost certainly lower power consumption than two separate drives.
This is super promising and could even be the reason I finally give Seagate a try
Normal Seagate are fine and have been for years. It's their cheap crap you have to look out for. Usually you only find those in OEM stuff though. WD didn't earn this reputation as much because they didn't really do cheap OEM crap (or if they did they refused to even be associated with it.)
Exos in general (not just the dual head ones) have always been fine
@@CreativityNull Fact check: WD is a massive OEM supplier for Dell, Lenovo, HP, and others.
They simply manufacture their bottom tier to a quality above Seagate's mid-tier so they don't get the bad reputation.
@@tim3172 And more expensive in more expensive models (at least for the spinning drives, back when they shipped them in these devices from the factory). As far as I know, though, that's mostly for SSDs currently, not spinning drives, except rarely in the past, which is where Seagate got its bad reputation. Seagate got that reputation over a decade ago, and it's not even relevant anymore, since the low-end stuff isn't using Seagate spinning rust and is instead using eMMC.
It amazes me that the air current (well, helium, to be precise) from one head moving doesn't disturb the positioning of the other head. And how do they know that certain seek patterns can't create an oscillating gas current that could amplify and cause errors or even a head crash?
No, you were correct at 'air'. Drives are NOT hermetically sealed and all that is in them is air.
@@quantos8061 No, these drives are helium-sealed.
@@quantos8061 It literally says "helium" in the specs.
The two actuators sit on the same actuator axis, so the upper actuator has the heads for platters 1-5 and the lower actuator for 6-10, say. So interference should be very low to begin with. This also means that these might be scaled to three or four actuators, because the footprint doesn't really increase.
@@deneb_tm No, they are NOT hermetically sealed. Take one up in a plane, if it's hermetically sealed the drive will bulge.
This is an improvement in storage density. Nothing else.
All other advantages are already available through software trickery and multiple separate HDDs working in tandem.
Yay!! Ive been hoping you'd cover these!!
Oooh, this might be very useful indeed! Gonna have to do reading on these
Oh I thought someone had finally made the idea I had probably 20 years ago of multiple heads per platter. This doesn't seem much cleverer than jamming two drives into one box, especially with added complexity of having to figure out which platters are on which drives. An interesting idea for sure, but feels like it needs more development time.
The 9MB Seagate is awesome!
Those old tech times were honestly way cooler than nowadays.
Note: I bought my first PC, an 80286 12MHz, with a 40MB Seagate drive. But I liked those times A LOT!😊
how much did that beast set you back? LOL
Probably wouldn’t get you much change out of $5K back then, I reckon
Amazing throughput for spinning media.
Curious if you could scale this up to 4-6 actuators, that would prolong the life of disks like these by a lot
Not in the 3.5" form factor, there's not enough Z height for all of the magnets and actuators.
If they brought back the 5.25" drives, they could scale the capacity way up, but that'd present a different set of challenges.
Also look at the arc of the head movement. You have a density limit there as well and they cannot impact the laminar airflow that the heads ride upon. I'd imagine more heads already made that smooth air buffer choppier. I wonder if anyone has revisited Bubble Memory. It actually shipped in the early 1980's - terribly slow access times but the density was stunning. I think the great granddaddy of the modern clam shell laptop, The Grid Systems Computer, shipped with bubble.
If you want 4~6 actuators, go to the market today, buy 4~6 HDDs, slap them together in a raid config, and you have your "4~6 actuator" HDD ready in your hands today itself 😂
@@GGigabiteM 4 would be possible if they could get mirrored sets of heads. Have 2 stacks of 2 actuators: one stack doing the tops of the platters, and one stack doing the bottoms. You would likely lose a platter due to needing enough height for the arms to overlap. With a helium drive it shouldn't be too terrible of a loss.
Thank you Wendell, love your work as always.
Wendel your content is fantastic, keep it up!
Very cool. I still remember the excitement when I got to play with my first RAID card and 2TB SAS drives for the first time. As good as SSDs are, there will always be a place for mechanical drives.
Always wondered why we never added more optical sensors to disc drives to read more per revolution (or why we never continued to improve that working prototype 500GB Blu-ray from the 2000s). Glad to see improvements yet again, but it does feel like we're abandoning formats that still have potential.
MiniDisc does exactly that: magnetic write, optical read.
Already happened, the Kenwood CLV 72x CDROM with seven sensors was 1990s tech
@@shanent5793 Huh, TIL. That idea for splitting the beams is pretty neat, and it might do wonders with modern advancements and components. I would also think there'd be issues with read speeds on normal discs, as that kind of laser setup would need discs to burn data/programs in a specific sector layout designed for those lasers to read more data. Also, the only Kenwood I ever saw was my dad's record player (Kenwood KD-2055), so that was an unexpected brand name to see on a computer part haha.
Mechanical drives are like the Undertaker, every time you think he's dead he comes back to life with a new trick.
'91 or '92 I bought two HP 2GB drives the same size. They took forever to spin up and would vibrate the table. There was a chime when a drive was done spinning up and ready; it sounded like an airplane fasten-seatbelt chime. I used an old sturdy XT case and ran them externally, using IDE cables out the back of the main system.
We need to get Wendell to exabyte amounts of storage. Make it happen.
Why stop there? Geopbytes or burst
I'm surprised Wendel and Linus haven't done this already. I know Linus has needed to call Wendel on some of his networking and servers; it's time to help the legend with his own petabyte project.
@@shadowarez1337 I mean poor ole me is over 10TB used. That's without having vast video archives, and having most of my games uninstalled.
@@MinorLG I'll be posting on the forums. I finally got things coming together, and 8x18 isn't enough. Plus I made the mistake of making a JBOD instead of a RAID 5 with 2 hot spares. I have over 45TB of data, so this server will back up the main one; then I'll wipe it and recreate it as RAID 5 with 2 drives for fault tolerance.
As someone who worked on an exabyte-scale Hadoop cluster: it literally takes thousands of machines in a data center. An exabyte is 1 million TB, or about 55,500 of those new 18TB drives 😲
Looking forward to the video on the SATA versions. Fingers crossed that when it comes they're decently priced down in Australia, because they could be great for some things I'm planning soon.
Oh, I was thinking they'd be on opposite corners to each other, so was wondering how they'd done it with the rust still at one end of the chassis.
Only at the end, when it was explicitly shown, did it click that they're overlapping each other. Did not expect that.
there is not enough space to do that
@@marcogenovesi8570 Hey, when brain's gotta fart, brain's gonna fart. Physics be damned!
For Ceph with host level failure domains, this is a drop in replacement, since a single drive failing is still 2 OSDs on a single node.
It’s been too long since I’ve seen a video from you guys glad I got the notification
Eleventy billion. Well done sir.
I don't understand everything yet, but I keep watching until I do. And reading elsewhere, of course.
Other than the number of vdevs involved, are there any differences between doing the two raidzs vs first RAID 0-ing the drives and putting them into one raidz?
One benefit of doing it with the two raidzs is that in the event of half a drive failing, you would not risk the second vdev at the same time, though that assumes extra slots. I'd be interested in seeing any performance differences: you'd avoid doing two expensive striping calculations, but you'd have to do a whole lot more "splitting".
Great for us Ceph users!
HP had dual-actuator drives about 20 years ago, but they didn't catch on. Maybe they were too complex to control/integrate into RAID systems at the time? Too expensive? Too locked into a proprietary ecosystem? I dunno, but I've often wondered if the idea would crop up again.
The thing that occurred to me at the time was, "that would really speed up file-copying on the same disk: not having to seek back and forth between the source file and the destination". At the time I was thinking about tasks like defragging, where lots of data has to be moved from one part of the disk to somewhere more appropriate. =:o}
Interesting to see what is on their website: "MACH.2 is the world's first multi-actuator hard drive technology, containing two independent actuators that transfer data concurrently"
Based on your comment I wonder if they are wrong with that statement...
OK, found something: Conner Peripherals (which became part of Seagate) in 1991 announced dual-actuator drives.
Chinook had 2 sets of heads on each platter. This has one head per platter, and is essentially stacking 2 HDDs in one case.
@@autohmae Interesting... I've just tried some Googling, and can't find any record of the earlier HP dual-actuator drives. Did they never actually make it to market? =:oo
I learned about HP's design from a big show-off display that was in the lobby of one of their sites, where a friend of mine was working at the time. The images showed an elongated drive housing, with an actuator assembly at each end, accessing the disk from opposite sides, and the text talked about how innovative this was (of course!) and how much it could speed things up.
Was I taken in by some CGI concept-art for an un-realised product, I wonder? (If anyone could do photo-realistic graphics at the time, HP certainly could! =:o} )
Looking at the images Seagate are sharing for how the Mach.2 works, they've got the two actuators stacked one on top of the other, with each one only able to address half the platters - hence the drive showing up as two separate devices. The HP design could access all platters equally, which would surely be preferable? But of course you then have the extra length of the housing to fit the extra actuator, which maybe made the product just too big to fit in existing machines. Certainly a 3.5inch drive would end up being as long as some of the early 5.25" optical drives, and have to be put in a 5.25" bay if there wasn't plenty of clearance at the back of the 3.5" bays... And that's just thinking about cheap consumer cases. Servers, with their rows of snugly-fitting exchangeable drive sleds, would be a whole different ball game (i.e. a whole new design required). But back then, SSDs were just a glint in their daddy's eye, so *maybe* it would have been worth it for some customers to invest in a whole new HP server, built to take the longer drives (and with the necessary "extra clever" controllers)...?
(I think the pictures I saw were of a 5.25" design, BTW, but I could easily be misremembering.)
I think we need an HP employee in this thread, stat! =:o}
@@autohmae [BRAIN SPARK] the name Chinook rings a bell... [SCRATCHES HEAD] ... But why would they have had details of a Conner product proudly displayed at an HP site? =:oo
Now I'm totally baffled!
@@therealpbristow Compaq used to have a tight relationship with them and was even an investor... and HP bought Compaq... so maybe that's the connection?
So these show up as two independent 9TB drives each? Does definitely take a bit of thinking to optimize the performance and reliability, but really cool technology!
New Video!! Yay
I have 2 of those Seagate drives, and 1 still works if you give it a quick flick of the wrist!
Very nice! And yes, it makes perfect sense to do it this way.
I am curious how this will work with SATA. As far as I know, you can't have multiple devices on a single SATA link, like you can with SAS and an expander chip. Will the drive have two SATA connectors perhaps?
It's one big LBA, split right down the middle. There's a helper script to help you set up partitions on the Level1 forums.
I wonder what this does for reliability. How often would you lose one set of heads and not the other? In consumer land, 18TB is too much to lose to one disk, but 18TB is also too much to back up.
Hey Wendell, exciting stuff. Could you let us know about the power consumption? Also, I'm curious: if one were to mix the 16TB drives with paired 8TB SSDs, would that configuration improve performance at all?
Spec sheet says it's double the power and double the price.
The only pairing that makes sense is using the SSDs as a cache layer for an HDD array.
Also could be a good first device in a surveillance feed archiving workflow. 18TB also happens to be the capacity of an LTO-9 tape.
I was hoping it was going to be 2 full sets of read/write heads, so that you could perform 2 parallel I/O operations on the same disk and double the random I/O performance. I was really curious how they fit a whole second arm mechanism in there, but no, they punted on it and just split the one actuator in half :(
What I don't understand is how this is a speedup; it's the same number of read/write heads traveling over the same surface area at the same speed. The same number of bits per second pass underneath each head regardless of whether they're moving in sync or not.
Very interested in this. I saw back in Feb press release for ultrastar dc hs760, but haven't seen much news on them since. Wonder how these would work in Unraid.
What's the max IOPS? Latency? Bit error rate? Sequential speed means nothing at 500MB/s if the competition is doing 15GB/s anyway.
I'm glad I caught this video on mechanical HDs, and not SSDs or M.2; they simply don't have the capacity. I really appreciate your presentations. I clearly do not understand HOW these HDs actually function, but I have lost enough data over the years to bad PCBs. I have a valid question (I think): perhaps you could explain why the manufacturers can't seem to make any sort of uniform replaceable PCB for their HDs? How many petabytes of data are lost every year because "donor" drives can't be found, or because the skills of a solderer are lost on a microscopic resistor or capacitor? I'm sure you could supply a short informational vid?
I sometimes think Wendell is the only guy worth talking to if one needs to discuss technical things inside IT tech.
These days it's way too much marketing blah-blah... Everyone is trying to hide the obvious red flags and design failures... They will happily sell you a consultant: a guy half your age with less experience who will point out that the problem you describe is rather normal behavior and can only be solved by throwing more money at it...
What I would love to see is how draid arrays in zfs handle these as parity is distributed differently.
I managed to get my hands on a SATA mach.2 drive. It doesn't show as 2 drives, only as a single 18TB drive. Did you make that video about the SATA version?
I was wondering about that since this video is a year old now
6:11 Is it possible to mirror the top half of each drive to the bottom half, then take those 12 mirrors and set them up as RAID-Z2?
Would RAID 0-ing the 2 halves of each drive to make them behave like 1 fast drive, then combining them into RAID-Z2, be better? How would it differ from 2 vdevs of raidz1-ed half-drives?
I've been curious about the price of these since they were announced a long while back, and I can see now they're quite pricey (as would be expected). It's cool technology, but it still can't replace SSDs in terms of IOPS and random performance.
I wish they'd put 4 actuators on the same platter; that would give insane amounts of read/write speed and much better seek times, with the heads having to accelerate and decelerate less.
@@fss1704 I'm sure they could in the 5.25" form factor.
I ordered the X24 version from Amazon. Does it have more power and speed?
Thank you tech 1
that Dell monitor is FILTHY. Love it
EXOS are INSANE Drives!
Not only Europe uses the metric system (part of the SI system); the entire continent of America (except the USA) has used it for a long time, and in fact so has almost the entire globe!
4:40 Resident idiot here: Why do you recommend to put in the new drives before taking out the old ones? Just a ZFS specific thing?
Why do hard drive manufacturers not just make the write head have strips of side-by-side read/write heads that could read and write parallel on-disk tracks? You could build them with extra heads to account for the fact that a head slightly further in towards the center, or out towards the edge, covers a different distance, either by being slightly bigger or by having more read/write heads. 5 parallel needle-ends = ~5x faster. As you increase density, you could end up multiplying your read/write performance, since you'd be able to add an extra read/write head whenever the density really increased.
If I've understood your suggestion correctly, the reason they don't do that is the maths/geometry of circles and lines.
The tracks/lines that are on the disk platter form an imaginary circle, whose center is the center of the platter, and whose perimeter is the track itself.
The further away from the center of the platter a track is, the greater the area of the imaginary circle, the larger the perimeter that imaginary circle will be.
The read/ write head is essentially a single point. It can be moved anywhere along the platter in any fashion, and read from any given track.
When you have a stationary single point intercepting a rotating circle, that point will always stay the same distance away from the center of the circle, and for a hard drive that means it will always stay on a track.
Introducing more read/write points changes the geometry of the read/write mechanism from being a simple single point, to some other shape, either a straight line or an arc of some
kind.
If we use a straight line of 5 points for example, then only the original point can be guaranteed to be on the right track. The four extra points will be further out from the center of the platter, and so they could cross over into the tracks that are further away from the center too.
If we decided to use 5 points that were placed in an arc formation to mimic the circular platter, they would only be able to reliably read from tracks that form a circle whose perimeter shared the same arc as the read/write layout.
The only way to make that idea work, is if you found a way to actuate the read/write points themselves, instead of just actuating the entire head.
There's an even bigger problem though. Lets say you've magically created a read/write assembly that is able to move the read/write points on the head dynamically as the head moves up and down the platter. And the result of that breakthrough was you could read 5 times as much data sequentially through the head....
The only way that breakthrough would increase read speed would be if you also increased the rotational speed of the platter to 5 times its original speed. That would mean having hard drives in your computer or server rack with an RPM of 50,000 or more!
That would make a lot of noise, consume a lot more energy, cause heat problems, wear out the platters faster, possibly damage the PC or server, or cause it to fall over. And the hard drives would need to be able to keep up with that speed without any errors, which is hard to make happen.
You would have all those problems, and still, you wouldn't be anywhere near as fast as a modern SSD which can be bought for pretty cheap these days.
Basically, the engineering behind HDD's is already so fine tuned at this point, and more progress on HDD technology is becoming exponentially more expensive, the rewards shrinking, and the SSD has already achieved much better results.
@@firstnamelastname-oy7es I don't think this would be an issue. Think about this: track density has obviously increased by many orders of magnitude over time. Drives like the IBM 350 originally had a density of 2000 bits per square inch; drives nowadays have densities of 1 terabit per square inch or greater.
Thus we have MANY more tracks in a modern drive than in a drive back in the day. It would simply be a matter of making a multipoint read head in a strip, with the furthest tip of the strip having multiple points (or, instead of a strip, you'd have say 3-5 read heads, or more, all in a line), and you could write through multiple heads at a time. Essentially, you could format tracks differently, whereby tracks were thicker than they used to be, but files and data would get broken up into 3-5 pieces or more. Or simply have 4 or more arms with read-head strips writing to the drive at the same time (say at cardinal points on the drive). Perhaps you could even have the read arm be a straight bar of metal passing over the entire drive, fixed in place to the chassis, with multiple actuators along the arm.
Wendell: "Easy peazy!"
Me: *brain smoking* "Bzzzzt!" *sounds and smells of crackling bacon* "ERROR 420!"
My drives(in Ralph Wiggum's voice): "I'm degraded!"
Oh boy. I sure hope we get SATA 4.
I guess we also need raid z4 and z6 now.
I think if they REALLY tried, we could have an independent actuator per platter per actuator stack: 12*2 for IOPS potential, 24x random IOPS. Still nowhere near SSDs, but WAY better.
Good enough for a media storage array. You don't need PCIe speeds for watching a 4K movie, other than for the convenience of backups.
Is it just me or does it seem like there have been so many advances in HDDs lately? I had thought that with time we would all just go SSD-based for everything as a "simpler" solution, but I find myself still buying HDDs all the time, because for large media storage, or just storage that is not speed critical, they are king.
Of course. All those crashed UFOs were good for something.
They have to innovate to keep up with SSD
I'm sold, when can I find them in the PC stores?
Ebay
I don't have hands-on experience with ZFS, so I don't know how wild/bad my idea sounds, but here it is: can we, say, create on each drive its own "raid0" aggregated meta-LUN from the two "halves" (so we can treat it as one fast disk) and then create raidz1/2/3 from these fast meta-LUNs? From my perspective it would simplify handling of the RAID in case of drive errors.
Edit: now that I reread it, I realize what I'm proposing is basically raid 0x ¯\_(ツ)_/¯
No, redundancy in ZFS is handled at the vdev level, i.e. at the lowest rung of the disk-pooling shenanigans. The pool just joins the vdevs; it does not add redundancy.
To make an example:
You make a pool with two mirror vdevs, and that's kind of a RAID 10 but not really.
You lose one drive, you are fine because you have another drive in the same vdev.
You make a pool with two spanned vdevs. You lose a single drive so now one of the vdevs is gone and the whole pool is also gone.
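The vdev rule above can be sketched as a toy model in Python (illustrative only, not real ZFS code; the disk names are made up): a pool is healthy only if every vdev is healthy, a mirror vdev survives while at least one member is alive, and a spanned vdev dies with its first dead disk.

```python
# Toy model of ZFS pool failure semantics (not real ZFS code).
# A pool is a list of vdevs; each vdev is a list of disk names.
# 'failed' is the set of dead disks.

def mirror_vdev_ok(disks, failed):
    # A mirror survives if at least one member disk is alive.
    return any(d not in failed for d in disks)

def span_vdev_ok(disks, failed):
    # A non-redundant (spanned/striped) vdev needs every disk alive.
    return all(d not in failed for d in disks)

def pool_ok(vdevs, failed, vdev_ok):
    # The pool just concatenates vdevs: lose one vdev, lose the pool.
    return all(vdev_ok(v, failed) for v in vdevs)

# Two mirror vdevs ("RAID 10"-ish): one dead disk per mirror is fine.
mirrors = [["a1", "a2"], ["b1", "b2"]]
print(pool_ok(mirrors, {"a1"}, mirror_vdev_ok))        # True
print(pool_ok(mirrors, {"a1", "a2"}, mirror_vdev_ok))  # False

# Two spanned vdevs: any single dead disk kills the whole pool.
spans = [["a1", "a2"], ["b1", "b2"]]
print(pool_ok(spans, {"a1"}, span_vdev_ok))            # False
```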
I think RAID60 might be the best way to use these; it covers 2 statistical HDA failures among the entire array. You get the performance of striping and still only 2 HDAs of overhead.
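As a toy sketch of that layout (my own assumptions, not from the video): suppose each dual-actuator drive exposes two logical halves, and the two halves of every drive are placed in *different* RAID6 groups, so one physical failure costs each group only a single member. The 8-drive / 9 TB-per-half numbers below are hypothetical.

```python
# Toy RAID60 layout for dual-actuator drives (illustrative assumptions,
# not a vendor recommendation). Each physical drive exposes two logical
# "half" devices; the two halves of a drive go into *different* RAID6
# groups, so one physical failure removes only one member per group.

def raid60_usable(physical_drives, half_tb):
    # Two RAID6 groups, one half from every drive in each group;
    # RAID6 spends 2 members per group on parity.
    usable_per_group = (physical_drives - 2) * half_tb
    return 2 * usable_per_group

def survives(physical_failures):
    # Each failed drive removes one member from each RAID6 group;
    # RAID6 tolerates two failed members per group.
    return physical_failures <= 2

# Eight hypothetical 18 TB dual-actuator drives -> sixteen 9 TB halves.
print(raid60_usable(8, 9))        # 108 (usable TB)
print(survives(2), survives(3))   # True False
```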
This seems like such a natural progression of the tech so why wasn't it done ages ago?
I've been wondering for some time why mechanical drives are still using moving heads. Is it not possible to design a read/write head that is a bar that spans one side of the platter and doesn't have to move?
This seems like a logical evolution, assuming that it's not physically impossible. It might take some engineering R&D but reducing the moving parts on such a sensitive and critical device would be of paramount importance.
Physics is against you there. The minimum size of head needed to create a sufficiently focused magnetic field is several times the width of the track you're trying to write, so to give every track its own fixed-position head, you end up having to fill the entire drive housing with heads and *still* can't get as high an areal density as an actuator-type drive.
@@therealpbristow I was thinking (in a conventional sense) that if there were discrete heads in the sensor bar then obviously they would have to divide the total number of platter tracks between them.
In that scenario perhaps there would be a way to differentiate/focus a head onto individual tracks. This is where some R&D and innovation would come in.
However, perhaps there is a way to make a sensor bar that doesn't need discrete heads but could sense individual tracks and modify them anywhere along the bar. This is purely imaginative speculation as it would require a new type of magnetic sensing technology to be developed.
I noticed the two heads are on separate platters.
Would the SATA version of this drive work as intended in a Synology 5 bay system?
Awesome post,
however, I cannot find any source for recertified/refurbished MACH.2 drives.
Because the line is new; refurbished drives are old drives.
That "30 year old" Seagate holds 9 GB, are you sure about that? I have a similar model (size wise) that just holds 40 MB, and it was a top of the line HDD back in the day.
Imagine if we had 5.25" HDDs today.
Based on today's bit density we could get 40TB per drive.
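That 40 TB figure roughly checks out as back-of-envelope arithmetic, if you assume areal density stays constant and capacity scales with platter area (diameter squared). Real platters are smaller than the nominal form-factor sizes and the 18 TB baseline is just an example, so treat this as order-of-magnitude only:

```python
# Rough check of the 40 TB claim: holding areal density constant,
# capacity scales with platter area, i.e. with diameter squared.
# Nominal 5.25" vs 3.5" form factors; real platters are smaller,
# so this is order-of-magnitude only.

def scaled_capacity(base_tb, base_diam, new_diam):
    return base_tb * (new_diam / base_diam) ** 2

# A ~18 TB 3.5" drive scaled up to a 5.25" platter stack:
print(scaled_capacity(18, 3.5, 5.25))  # 40.5 -> roughly the claimed 40 TB
```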
Would be nice if SATA SSDs came in the 3.5" form factor. I don't think I'll ever be able to afford to make the 288 TB in drives I have now solid state.
@@timramich The reason they don't is that the market share wouldn't be high enough.
2.5" drives can be used in laptops and desktops (with an adapter plate).
Heck, we don't even need the full 4" of length on 2.5" SSDs.
If you open them up, the PCB inside is sometimes only 1" long.
It's pretty simple to go fast, it just requires a quicksort combination as such: 9 - 8 + 7 - 6 + 5 - 4 + 3 - 2 + 1 (and '0' is a stop bit in general, it's a universal marker, because if a bit is ever zero, our plans to go fast are brutally foiled). So, let's start easy, a seven segment combination in quicksort; this is actually a Jacobian determinant matrix operation in chapter 9 of Calculus 4: 111 110 101 100 011 010 001 (and 000 means stop). It's very effective because the floating point and integer operations are rated at 80 gigaflops per core.
I wonder if each read head could be controlled independently.
Wendell & ZFS is like, "Say the line, Bart!"
Neat... so this shows up as two 9TB disks?
yeah
This was my question as well. I saw an Intel AIC 4TB or 3.84TB P4xxx card the other day that showed up internally as 2x2TB drives as well. Basically an x8 PCIe card that split internally into two U.2 devices. Figured these would look like that too.
Can't find these available in Europe anywhere.
Seagate missed a trick here; this is essentially two drives in one package. Cool? Yes, very. However, you could just spend extra and get two drives for better redundancy, as explained in the video. What would have been REALLY cool is two actuators that read the same platters. That would still give you twice the read speed and half the seek time (probably more, since the platters would have to be smaller), and also twice the redundancy in some instances, since you could just park a malfunctioning head and use the remaining one. I'm not an engineer, but I don't see why that type of drive isn't already widely used. I know they existed in the 90s but never really went anywhere; it would be cool to see if it's possible, or why it's not, if it isn't for whatever reason.
Two actuators on the same platter don't fit in a 3.5". You would have to use one arm on one side of the disk and one arm on the other to avoid collisions. There is no space for that unless you REALLY reduce the platter sizes, and then wtf are you doing at that point?
@@marcogenovesi8570 I did mention the platter size would have to be reduced a bit. Though, I don't see why that would be too much of an issue since most sas drives have smaller platters for better seek times. Or at least they used to, not sure if they are still like that.
Hmm, wouldn't these theoretically have twice the failure rate because of twice the moving parts?
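The "twice the moving parts" intuition can be checked with basic probability (a sketch with a made-up failure rate): if each actuator assembly fails independently with annual probability p, the chance that at least one of the two fails is 1 - (1 - p)^2 ≈ 2p for small p. Whether that counts as a whole-drive failure depends on whether the host can keep using the surviving half.

```python
# Probability that at least one of two independent actuator assemblies
# fails in a year, given a per-assembly failure probability p.

def either_fails(p):
    return 1 - (1 - p) ** 2

p = 0.01                # made-up 1% annual failure rate per assembly
print(either_fails(p))  # ~0.0199, i.e. roughly 2p for small p
```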
I'd be curious to see how these drives perform in a Ceph array. Even if one half of the drive dies, it could help a datacenter limp along, or allow higher levels of redundancy in the same hardware footprint.
Well done Seagate.
Are these even available to regular consumers? I've been keeping an eye out for some, since I can get a full Epyc system for just under $2000. Figure it's time to build a good server now and invest in proper great HDDs.
So... can we get a MAMR version? Boost that puppy up to 24TB+ per drive.
I love my HDDs even when they fail :)
It's physics :) meet mechanics!
Eh, I would have thought this'd be firmware managed dual actuators. I guess the SATA ones are?
I've been doing some research for a couple of years now.
Maybe you can help me figure out a path forward?
Why haven't HDDs had parallel read/write heads?
Are they coming out with a sas4 version?
It's a modern Quantum Chinook!!!
For something on the drive itself, a single actuator with dual-surface striping ought to be feasible: buffer 2 tracks in a cylinder and read/write them simultaneously, which is much easier for the drive logic to handle.
Does 45Drives use NetApp 3.5" brackets? When you said 45 drives, I saw all NetApp brackets.
That's a NetApp disk shelf, connected to a SAS card in a 45Drives server.
@@marcogenovesi8570 Thought I'd recognized NetApp. In the Netherlands I've never seen a datacenter with 45Drives equipment, so I thought maybe they'd stolen intellectual property.