Those hard drives standing upright are making me nervous
Imagine being me and trying not to knock them over as I am waving my arms! Actually, drives are pretty heavy.
Why worry, it's not LTT 😂
@@ServeTheHomeVideo i bet you just Grabed some old 1 TB disk as props
@@plaguedog32 hahahahahaha true that.
Who do you think he is, Linus from LTT?
I wish they'd just make super large 3.5" SSDs... I think they make a lot of sense for the budget consumer market, which needs low cost and capacity but where density is less of a worry
Bring back 5.25" drives so we can get several PB in a desktop
Too few vendors on the HDD side for major competition.
Bring back the Bigfoot brand, 5.25" drive when everything else was 3.5.
Even the 2.5 inch form factor is dying for consumer stuff. It's all M.2 these days.
@@shadow7037932 space would be the same after adapting. With smaller drives you get better reliability since you use more drives for the same storage (if you’re using some kind of RAID/Z ofc)
At work we use the precursor of this drive for backups. Mostly bulk sequential writes with large blocks, and they are fantastic for that; plus, with the fast random reads, restores are very spiffy.
Super feedback
@@ServeTheHomeVideo I thought all flash has a limited shelf life if it's not powered on regularly; any idea what it is on something like this? (As compared to tape or an HDD, which has a much longer, maybe even indefinite, unpowered shelf life assuming no magnetic interference.) Thanks
@@cctv4268 Spec sheet shown says 3 months. Tape can last decades if properly stored. Spinning hard drives, maybe a few years.
This 1 drive costs more than my entire gaming computer & homelab combined.
And in 3 years it'll be on ebay for $50
I haven’t _really_ added it up, but it may cost more than the total I’ve spent on computers since I built my first PC in 1985.
Ok, after adding in my head, probably not; it seems to be about $12k. But still!
At my work we have SSD PCI cards that cost $20k each. This type of hardware is for servers that are making you money.
Datacenters get drives like this one for less than $1000.
@@myne00 that's so fckn sad and so fckn true :D
A $5400 SSD needs a 2 year warranty. Maybe WALMART has one for it.
Sorry, but Walmart is out of stock. You did way better than me. I just found it for $7,000. Won't be serving my home any time soon.
I guessed $5k. Yay, I won! 🎉🎉 Lol, it would make a nice game library if you want more games than you'll ever be able to play. Could hold the last 1000 AAA games released.
My god (nontheistic); the skibidi toilet is doing circles right now with that ''cheap'' SSD
Actually $5400 or $7000 both seem like a good deal...
In the EU, it has a MINIMUM 2 year warranty.
since this is mandated by law
Capacity in storage becomes a metric for rack usage, that is, the number of server racks dedicated to WORM tier storage. This in turn becomes a metric for power usage, cooling requirements, and datacenters needed to fulfill client needs. Devices with specific use cases are more useful at scale than in "typical" workloads.
In our cloud hosting, we have noticed that customers only grow data slowly after their initial ingress. This means our storage is somewhere around 90% read and 10% write, where writes are both overwritten data and net-new data. This metric will likely increase as the number of customers increases, until it becomes asymptotic.
Good points
Well, databases don't grow much or at least not rapidly. That leaves document storage, which grows only slowly when one's got such high volume storage available and most documents would be pdf based, which is compressed. Where I could see it growing more rapidly is in specialty environments, like astronomy or a hospital diagnostic imaging department.
I'd hate to have to try to back an array of those to a hot site though!
Actually just holy shit. You can get nearly 2 petabytes of blazing fast storage in a 1U server with the E1.L variant. That is actually nuts
Yes.
@@ServeTheHomeVideo For only 140k USDollars. 🤣
The title should have been: "MASSIVE 61.44TB SSD Puts Puny Wallets to Shame"
Depends. This is a lot less expensive than building out using smaller drives to hit the same capacity
@@ServeTheHomeVideo Show your math. 16 x 4TB SSDs cost $3,200. For $800 more I can add two drives in RAID6 and have two spares for redundancy and HA.
@@unhandled12345 At $200 each those are consumer drives without PLP. You also would need to connect those drives, probably using a Broadcom Tri-Mode 9670W-16i or something like that so that is $1500 before you get the cables to connect that many 4TB drives. If you wanted to get more performance, you could connect all of the drives at PCIe Gen4 x4 but then you need 64 lanes just for your array assuming they are direct attach U.2 drive bays instead of the Broadcom RAID controller. Your reliability is considerably worse as well if you did a 16 drive RAID 0 because not only are all of the drives failure points, but all of the power and data path connectivity to each drive. That Broadcom controller does not have room for more drives, so you would need a PCIe switch NVMe chassis. You also cannot do more than 12 U.2 drives in 1U, so you are now double the rack space. 16 drives is a big number, and you have to add in the cost to attach drives.
@@ServeTheHomeVideo …really? RAID 0?? Who in their right mind would EVER use RAID 0 for data storage??
For speed tests, maybe.
@@ernestgalvan9037 That was the question: a 16x 4TB RAID 0 array.
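For anyone following the back-and-forth, here is a rough sketch of the comparison being argued, using only the ballpark prices quoted in this thread (illustrative figures, not current market prices):

```python
# Ballpark comparison of one ~61.44 TB drive vs. sixteen 4 TB consumer drives,
# using the prices quoted in this thread (illustrative only).

single_drive = {"price_usd": 5500, "capacity_tb": 61.44, "bays": 1, "pcie_lanes": 4}

diy_array = {
    "price_usd": 16 * 200 + 1500,  # 16x 4 TB drives plus a tri-mode HBA, before cables
    "capacity_tb": 16 * 4,
    "bays": 16,                    # more than fits in a single 1U backplane
    "pcie_lanes": 16 * 4,          # if every drive were direct-attached at Gen4 x4
}

for name, cfg in (("single 61.44TB", single_drive), ("16x 4TB array", diy_array)):
    print(f"{name}: ${cfg['price_usd'] / cfg['capacity_tb']:.0f}/TB, "
          f"{cfg['bays']} bay(s), up to {cfg['pcie_lanes']} PCIe lanes")
```

Raw $/TB can still favor the 16-drive build; the counter-argument above is about everything else: power-loss protection, lanes, bays, cabling, and rack space.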
Finally, something small enough that it fits in a Cybertruck bed.
spitting out my coffee as I read your comment. FUNNY!
Must be high ranked on a list of products that don't match the pre-production hype. At least it wasn't vaporware. For what that's worth.
These sound like perfect game drive storage units.
no price on the HDDs and no price on the SSDs - thanks a lot
If you ask about the price, you can't afford it! (me neither)
Mouser has the SSD for $7700, which is pretty good for that much SSD storage in a single module.
@@shadowtheimpure could you picture those in a SAN? Hell, even in a RAID with 4 + spare in a proper server or NAS?
@@spvillano I could, though definitely in a proper data center and not a homelab environment. I might just start saving up for one to add to my drive pool as tier 1 storage. Wouldn't be too difficult, I'd just need to get a U.2 carrier card. By the time I've saved up for it, it'll probably be less than half the price so that's a bonus.
i love how you casually mentioned the Cybertruck xD this needs a video of its own
Jake Tivy looking at his server like “hmmm…”
Ha! I would not be surprised if he shoots me a note today. He has told me he is not a Cybertruck fan.
At 61.44 TB, 0.58x writes per day averages out to 412 MB/second or 3.3 Gbit/second.
With 65 PB written as max endurance, that works out to 1,824 days of continuous writing at 3.3 Gbit/second, or two days shy of 5 years.
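A quick sanity check of that arithmetic (a sketch using the figures in the comment above, which are taken to be the drive's rated DWPD and PBW):

```python
# Endurance math for a 61.44 TB drive at 0.58 drive writes per day (DWPD),
# using the figures quoted in the comment above.

capacity_tb = 61.44
dwpd = 0.58
endurance_pbw = 65            # rated petabytes written

daily_writes_tb = capacity_tb * dwpd                       # ~35.6 TB/day
write_rate_mb_s = daily_writes_tb * 1e6 / 86_400           # ~412 MB/s sustained
write_rate_gbit_s = write_rate_mb_s * 8 / 1000             # ~3.3 Gbit/s

days_of_writing = endurance_pbw * 1000 / daily_writes_tb   # ~1,824 days, just shy of 5 years

print(f"{write_rate_mb_s:.0f} MB/s ({write_rate_gbit_s:.1f} Gbit/s), "
      f"{days_of_writing:.0f} days of continuous writing")
```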
I was just thinking how there hadn't been an STH vid in a while. Still reading the site but *love* Patrick's enthusiasm.
Very fair. This video needed something, and the huge 2.5GbE switch round-up has been sucking time.
LTT's new video title: "We upgraded new new new new Whonnock to 24x 61.44TB drives"
But dropped it on the way.
I think they had once 100Tb Kioxia nvme drive but in 3.5" enclosure
They are in Raid 0 and hold information critical to the company
I can’t stand watching their content honestly. It’s like mr beast for pc hardware lol
People still watch LTT?
I've had my eyes on these since they were announced. I want one, but they're so expensive. Still, when space and weight are on your priority list, this is what gets you excited.
Lately I've been experiencing insane unreliability from enterprise-grade SAS and NVMe SSDs; they are croaking like crazy, from completely dead within a few weeks (dead flash controller) to gradually croaking with progressive loss of usable bits per flash chip. All under full warranty, but we're losing them faster than we can replace them. Luckily spinning rust and old SAS drive racks are still there to handle the failovers, but this new stuff is getting out of hand.
which brand?
🐸
The energy at the start of this video startled me
5AM peak energy
I reduced speed to 75%. Now he sounds like he had a three martini lunch
I just don't understand the excitement at that price... the pricing kills it.
Now I am interested in a deeper dive comparing HDDs made for write-once-read-many use, like security cameras and the like, vs. SSDs with all sorts of different tech, from SLC to as high as the market currently offers. You do bring up an interesting point regarding reliability of storage. As someone as paranoid as me, I still rock the spinning rust for its tried and tested reliability, or at least the fact that I don't worry about the seemingly random death that SSDs are known for. The spinning rust has enough bells that I get some time to replace the drives.
Also, loved the extreme testing of vibrations! Didn't expect a drive to die that quickly! Thank God I don't have my drives in such intense conditions. But I do know that just having 2 drives sitting next to each other is a risk in itself, let alone a NAS configuration with multiple of those rusty boys.
I suspect shingled technology is on its way out, with Seagate having released their first HAMR drive recently while both WD and Toshiba have announced upcoming similar models.
The event horizon telescope guys would have loved SSDs like this.
I don't know. I have hard drives still working with over 95,000 hours (10 years) on them. I also have several SSDs that started failing at 35,000 hours. Overall, my SSDs live half as long or less. Something to consider, I think... Thanks for putting these out. Please keep them coming :-)
one of these instead of a large NAS... I dreamt about it quite a bit. Finally getting to hear why it might be a bad idea.
I think there are a lot of people who do video editing and such that will think about 2-4 of these in a system. It costs a lot, but if you make a living off your system it might be worth it.
@@ServeTheHomeVideo I am technically employed as a video editor right now... Maybe that's the mental justification I need.
I would want to store a lot of videos and photos too, mainly store, which shouldn't cause a lot of wear. The language model cache will be a dedicated drive, as that isn't data that's important, just useful.
I have to look up prices and also make some calculations on how long 31TB would last me, since that sounds a bit more reasonable. PCIe lanes are the actual limitation for the workstation I dream of.
Oh yes, one of these instead of a big NAS... sure... so if the drive fails you will lose the data AND more than five thousand dollars 😅 nice idea 😂
I'm not worried about write endurance, more about long term data integrity and durability.
I'm looking to have a large capacity, but portable small form factor long term or archive storage solutions.
I have over 100TB of personal data, plus 100TB backup split across 40 x 5TB HDDs....so consolidating all of that into just 2~4 small portable storage devices is a godsend.
Just get them. Then you have 3 backups. Store 100TB of the drives in a lock box at the bank and rewrite to them every couple of years. The other 100TB at home for easy access. Makes sense if you have like 10 to 14 grand to burn. But don't trust the SSDs alone for reliability and the backup.
Like any storage medium you’ll have to rewrite these periodically, perhaps a bit more often than HDDs, but other than data rot, overall MTBF should be a lot higher thanks to so many fewer devices.
(Dang though, that’s a *lot* of personal data. What is it if you don’t mind me asking? Big video files?)
quad layer blu ray still rocks... and lasts
Finally a drive which can take care of my totally-not-pron collection. Of course, I guess for its price one could hire a few real people to do stuff with for a long time
I would definitely get this 61TB beast... 10 years later! (I got lots of Intel S3700 for home usage since 2022)
I have an 8tb Kingston data center U.3 drive in my gaming/workstation pc. I’ve got it mounted to the exact same star tech adapter board too! It’s fantastic!
Mine's 6.7 TB, same adapter board. It's the fastest drive in my system. Faster than my Optane drive.
The 4K random performance of the Optane is where it still blows everything else away
@@ServeTheHomeVideo Compiling a big project, the U.2 drive is noticeably faster to complete. Where would I see the difference in random IO performance?
@@ServeTheHomeVideo Longevity of the Optane drives on writes is also very good - there are better out there but not a lot of them.
Also, I like DWPD as a metric, but would love to see TBW or PBW as a more common metric alongside it.
They convert trivially.
I actually really like Solidigm; the M.2 boot drive in my computer is a Solidigm 1TB SSD, so I would gladly buy this SSD if it wouldn't break the bank. This would be great for people looking to store lots of data though, like people who fly drones equipped with LiDAR or hyperspectral sensors.
Mobile data center...doing some war driving? Nice! Vanity plate should say 'PN TSTR'
Was actually capturing video
Costs more than my 2020 build: 3950X - 64GB RAM - 5700XT - 2x2TB NVMe - 2x4TB SSD - 1x8TB + 1x16TB internal HDD + 8x8TB external HDD. Would be nice to have 6 of these bad boys in my daily even if I couldn't fill them. DOH, the backup time lol. Oh well, roll on the lotto jackpot, then maybe I could try!
When I first saw the size I thought uh-oh an Amazon scam lol.
Nope. Big vendor SSD
24 drives in a 2u or 12 in a 1u...but I only have 2 kidneys...
Well it looks like I've found the channel for me. SSD storage is so freaking satisfying.
You must lead a strange life.
@@unhandled12345 I indeed do. I'm incredibly broke yet use only SSDs for backup. They're just cool. Idk man do you have something that just interests you even though it really just kinda exists?
@@LindonSlaght SSDs are not a good backup medium. Use platters - they're cheaper, larger, last longer powered off, AND can be recovered if failed.
@@unhandled12345 I've had at least 10 people tell me this, I respect your opinion but at this point I'm not changing.
@@LindonSlaght Such a strange response. I guess you like SSDs more than your data ;)
I needed a 20TB Exos SSD equivalent anyway. Because of the recent reports of Exos drives failing, I can be assured that there will be an alternate SSD solution like this one that I can swap my HDDs with.
Right. This replaces multiple 20TB drives.
I find the drive details page interesting as these seem cleverly tuned to show how awesome the drive is.
Micron 7450 pro has a capacity of 15TB and an endurance of 28PBW. so the drive endurance is nearly 2000 full overwrites.
Solidigm has a capacity of 61TB and endurance of 65PBW, roughly about 1000 full overwrites.
In theory, adjusted for size, four Micron 7450s outperform a single Solidigm in almost every way..? Granted, the chance of a disk failure when you have 4 of them is climbing, but with 4 drives you can already think of a RAID with parity or redundancy. Not so much with a single drive…
You are right the Micron 7450 Pro is a faster drive. The point of this is that it is high capacity not that it is the fastest. In a server when you have say 12 front drive bays it is 180TB vs 720TB per U which is a huge difference
If you are worried about write endurance, you go with SLC or at least Pseudo-SLC drives.
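A small sketch of the full-overwrite math being compared above (capacity and endurance figures are the ones quoted in the comment, assumed accurate):

```python
# Full-drive overwrites = rated endurance (PBW) / capacity (TB).
# Figures are the ones quoted in the comment above.

drives = {
    "Micron 7450 Pro 15.36TB": {"capacity_tb": 15.36, "endurance_pbw": 28},
    "Solidigm 61.44TB":        {"capacity_tb": 61.44, "endurance_pbw": 65},
}

for name, d in drives.items():
    overwrites = d["endurance_pbw"] * 1000 / d["capacity_tb"]
    print(f"{name}: ~{overwrites:.0f} full overwrites")

# Roughly 1,800 vs. 1,050 full overwrites: the smaller drive does allow more
# rewrites per TB, while the big drive wins on capacity per bay.
```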
Whoa whoa whoa, dial down the enthusiasm a notch. That intro always gave me a heart attack. But yeah, cool drive.
Just what happens when I wake up and record these at 5AM.
They do. ExaDrive has a 100 TB SSD which is the size of a 3.5" drive.
I keep Tweeting at SSD companies offering to certify their big ssd drives with my project mergerfs if they send me a few but they never seem to respond 😆
If I'm mathing right, 36 TB/day is roughly 415 MB/s. You could saturate SATA 24/7 and barely go over the rated endurance.
And SATA HDDs do not have that much endurance.
@@ServeTheHomeVideo I only recently started to have to care about drive endurance and longevity professionally, and I'm just so happy I don't have to care about HDDs. Especially since we are deploying to high vibration environments.
How come nobody mentions 3 month power off retention (3:35)? So if I put the ssd on a shelf, my data will be gone after 3 months? Scary 😮 On another note, I love your excitement, Patrick! It's one of the unique features of STH videos and it also gets me enthusiastic about the things you're presenting 😁 Keep up the good excitement!
Thanks. I think a big factor is that the idea that you pull an enterprise drive and do not power it on for 3 months is not one that many can relate to. These are meant to run 24x7 instead of being written to then shelved for months or years.
@@ServeTheHomeVideo Good point. Just something to keep in mind for homelabbers if they wanted to invest in such drive.
@@misku_ 👍 "3 month power off retention" . . .
I suppose such a time cutoff is where they felt comfortable from a nine 9's perspective. But it raises the thought that only a few weeks unpowered might put bits at risk. Perhaps refrigerate while unplugged to be more sure? Dunno.
Glad to see the $40,000 samsung exadrive finally has competition, lol
(That's the 100TB one they released a few years ago. It's a 3.5" though)
I don’t think ExaDrive is from Samsung…
only 6k - that's pocket change :p
This is also much less expensive and GA/ in mass production.
Nimbus did release a QLC 64TB version of the drive that was only $11K, which considering that was nearly 4 years ago isn't awful at $170/TB. This drive at $88/TB is competitive with current performance consumer drives.
Didn't Intel release the first 60TB SSD drive several years back?
I was in that exact situation recently, thinking about a workstation and a NAS with much higher than average storage requirements while still keeping a more or less compact layout. 4 TB M.2 drives aren't the solution; they're too expensive per TB and require too many PCIe lanes per TB as well, even if you think you have an abundant amount of PCIe lanes, for example on a Threadripper platform. Turns out, you do not. 8 TB M.2 drives are a step in the right direction in terms of density, but they're few and far between and exponentially more expensive than 4 TB, so they were off the table as well.
U.2 was the next logical step to look at. The cheapest 15.36 TB options were much more attractive AND they also leave you with a clear upgrade path. 30.72 TB already existed and it was clear that 61.44 TB was coming when I started thinking about these systems. That means that if you start with 15.36 TB, at least 4x is possible just by changing the drives, without doing anything to the platform, which makes it much more interesting. At the moment I'm testing a few PCIe adapter cards for mounting up to 4x U.2 directly to the card, as well as one like you showed that uses cables to connect the drives. Bifurcation is obviously a given, and this all looks very promising for very high density storage builds.
At around $1K each, the 8TB drives are more expensive on a $/TB basis than these.
This is exactly what I've been wanting and waiting for forever. HDDs fail, constantly and regularly. It's nearly guaranteed. It's not "if" a HDD fails, it's "when". SSDs are simply more reliable. It's not that (high quality) SSDs don't fail, they just fail *less.* This is the answer to long term data archival - measured in decades. The future is awesome.
0:17 this is so so wrong, seeing that on the tiny PCIe card 😅 I love it
I suppose drives like these would be great for what basically boils down to ROM / WORM use, like streaming services do.
Its most important aspect would be random read speed, to supply enough data to those watching/listening to all that is stored on it. And there a bottleneck would arise, as it holds more data than the random users can pull out at a single time.
Data retention is the key problem for QLC, not just endurance. I won't risk storing 61TB of my data on one single drive.
BTW, for a 61TB SSD, it may be more interesting to see this drive's over-provisioning and whether it's a DRAM or DRAM-less design.
if you have a 61TB ssd you're running this 24/7 behind UPSes
In a server scenario it would be running in a raid configuration. Can guarantee it has DRAM if it is putting out 7000MB/s reads.
You need at least 2 of them to make RAID 1... or more for something better. Plus backup.
I finished migrating my TrueNAS from 8x8TB HDDs to 12x7.68TB SAS3 HGST SSDs, and claimed my house "spinning rust free"... and now they feel old already. 61.44TB NVMe in a single drive! Almost the same capacity as my entire usable space with 12 disks.
That is the craziest part
When "like" is every noun, verb, adjective, adverb, prepositional phrase...
Still, I have to give props to a channel that puts a data center into a rolling urinal. *RESPECT!*
This is a replacement for tape. A way to store archive data yet have quick access. Backblaze will probably love these drives.
I guess mechanical drives don't like "drop kick me over the goal posts of life."
Storage _capacity_ sure. Longevity absolutely not. (which is the point of tape. even more so for archival)
@@jfbeam Back in the '80s and '90s, I remember tape operators had to exercise tape: read and rewrite the data to a new tape. Also the tape layers would sometimes stick together.
@@CaveSkiSAR Don't get me started on the widespread usage of 3 levels of tape backup - daily, weekly, and monthly - ALL of which had to be refreshed every so often.
@@bricefleckenstein9666 The Pain, The Pain.
"I was Internet before Internet was cool."
@@CaveSkiSAR Actually, I was Internet before the Internet actually existed.
I never had access to ARPAnet, but I WAS active on UseNet for at least a decade before the Internet as such was created when Congress opened the DARPA-type network to commercial usage.
Can't wait to see these hit the market used at a reasonable price
"At what price?"
"Everything."
At last we can do Unreal development without fear of running out of space.
This is great news and can’t wait for it to be available to average enthusiasts.
I might be alone on this, but I wish the industry would churn out some modern 3.5 SATA SSDs. I own a couple of older enterprise disk shelves, and being able to populate those sleds with massive SSDs would be really cool. Sure I could put some 2.5 inch drives in them, but that feels like a waste of space 😂
I remember when Nimbus released the 100TB 3.5" SATA SSD for $40,000.
Yea this is more like mass production, NVMe PCIe Gen4, and $5.5K.
@@ServeTheHomeVideo For my use, it would be cheaper to buy a bunch of lower capacity ones as they would mostly be used for local video streaming. $5.5K is a bit out of my price range granted I spent $2000 on used Threadripper Pro 3955WX and Asus Pro WS WRX80E-SAGE SE WIFI.
Even with a decade or two of hoarding SSDs and hard drives I'm not even close to what this drive has as storage alone. Imagine having that in your notebook.
This is probably a bit too much power and too large for most notebooks, but yes, that is where this is heading.
Hopefully someday
The fact that most notebooks still fall under the 1TB mark is beyond pathetic. Even if they came with 8TB I’d still see that as somewhat meh.
@@avegaiiiPeople are being moved to the cloud and webapps, there’s less storage needed.
@@LtdJorge cloud storage is even more expensive
I pay around $300 a month to have a dedicated remote server with around 100TB.
Normal cloud storage is ok for sharing a few files but a lot of those services cap out at a measly 4TB.
@@avegaiii THUMBS UP; you're my man!!!
Cool drive and video!
Thanks!
Please stop, I can only get so hard and I just boought my 24 tb drives last year
Ha!
The Hynix branding effort I think was successful. No one ever talked about Hynix SSDs, but now everyone is talking about Solidigm. I forgot Hynix acquired Intel's SSD division, so as STH said, these are more from the old Intel side.
Solidigm is more from the Intel SSD side. SK Hynix SSDs are still being developed.
@@ServeTheHomeVideo You are right, I forgot Hynix acquired Intel's SSD division.
yea, long term endurance info would be awesome
wonder if backblaze will take up that challenge 🤔
As I recall, Backblaze started giving endurance info on the SSDs they use for boot drives about a year ago.
They would NOT be interested in this drive for their data drives, TOO EXPENSIVE.
They MIGHT start looking at SSDs for data drives in a decade or two, if recent price progression continues and SSD finally passes HD on amount stored per dollar.
As a nerd's nerd, my first thought when I heard it was a 2.5 was, "and it'll fit in my frickin laptop...with LA-SERS, Mr. Bigelsworth!". Need a laptop with a SAS controller, lol, but a nerd can dream.
It would be cool to see videos on the feasibility of low-redundancy SSD based NAS configurations: I could 100% justify the cost of swapping 6x 8TB HDDs with 1x ~30TB SSD in my home NAS, if it means I ditch the noisy 4U in my office to a silent NUC (networked via 10GbE).
Specifically, the problem I see is (lack of) ZFS error correction. If the whole drive fails I can have an on-site copy in the garage (which does not need 10GbE for replication). I kind of assume I benefit at a 2:1 ratio in a RAID-Z2 configuration from ZFS's error correction when one drive's read fails the parity check (bit rot, a loose neutron, whatever). This is a home NAS, data needs to sit around for decades where 99.99% of the reads are scheduled scrubs, and I don't have a good sense for how much this actually happens with SSDs. I've definitely lost family photos to bit rot over the last 25+ years.
Can't afford those, but I did pick up 4 new-old-stock Intel D5-P4326 15.36TB drives. QLC, yes... but I love them, as bulk storage is exactly what I want, and I don't have buyer's remorse.
Thanks for the video ! Any idea when we may see 10TB to 12TB consumer drives from manufacturers like Samsung? I really don't understand why we don't have them now considering the size of the chips and boards inside a 2.5" SATA drive enclosure.
There is a lot of empty space in the 2.5" enclosures they use for Sata SSDs. The limitation on capacity must be intentional. Typical non-compete agreements between companies.
I guess never. The reason is that SATA is basically dead. Not really completely dead, but for some use cases, like high capacity SSDs, it's effectively dead. It's an increasingly uninteresting market segment for the manufacturers. We've had SATA3 / 6G for a while now (15 years) and the SATA-IO, which develops the SATA standard, has already stated that there's no interest in developing SATA any further in terms of speed, like SATA 12G and 24G, which are speeds exclusive to SAS. SATA will remain as a cheap solution for less demanding users and workloads, and the performance/advanced/enthusiast market is served by NVMe solutions. Apart from some niche exceptions, I think 8 TB SSDs are the biggest SATA SSDs we will see frequently for the foreseeable future, maybe ever.
Way cool, but I think I'm gonna have to stick with my plebeian 16x 2.5" SATA drives in an Icy Dock enclosure. Good enough for me, but then I'm not a big business.
Three of these SSDs in RAID5 (or the ZFS equivalent) is bigger than my complete 12x 12TB NAS... That's insane!
So... If you do a giveaway, I'll sign up, because power isn't cheap (I'm in the EU) :P
I get 15,372 terabytes in my rig using these drives: 7x PCIe 5.0 split into 14 PCIe breakout boxes, 96-core AMD, 2TB RAM, dual 5090s when they arrive, 7 breakout boxes with large fans, 12x Apex 21 M.2 NVMe cards with PCIe flexi extenders for these drives, for 15,372 terabytes of storage. 6x 8K screens. Dual 3kW PSUs. The ultimate rig for data and creative usage. CPU and GPUs are liquid cooled and it's a silent solution throughout; servers tend to be way too noisy. 28GB a second per card, roughly 700GB a second memory, so around 325GB a second access.
I remember installing the OnTrack drive overlay because my BIOS wouldn't support anything more than 540MB, and thinking "1GB? I'll never outgrow that!"
Hi, nice capacity; writing power consumption is a bit disappointing though, as it is almost as high as the three HDDs together.
I have a question: does the refresh process (which keeps cells from losing data, so you don't have to read all the data once in a while and rewrite it yourself) count as a regular write or not?
With QLC I worry about longevity, yes, but I worry more about data integrity. If using as a backup medium, which might not get touched for 6 months or a year, and almost surely will not have every byte re-written every time (i.e. differential or delta backups), I worry that my data might not come back exactly as intended (accurately distinguishing between that many levels of charge). But I love the idea of using a 2.5" housing to pack in more storage than is available on m.2 drives (yes, I'm talking consumer level here rather than enterprise). I wouldn't mind an MLC or TLC drive in a 2.5" form factor packing 16 TB.
That's why tech like ZFS is used! ZFS can routinely "scrub" the data and ensure that everything is still correctly readable, and anything that isn't gets "restored" to "good"
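For anyone not running ZFS, here is a minimal sketch of the same idea applied to a plain backup volume: record checksums once, then periodically re-verify them to catch silent corruption. The paths and manifest name are placeholders, not part of any particular tool.

```python
import hashlib
import json
import pathlib

# Minimal "scrub" sketch for a plain backup volume: store SHA-256 checksums once,
# then re-verify them on a schedule to catch silent bit rot. Paths are placeholders.

BACKUP_ROOT = pathlib.Path("/mnt/backup")
MANIFEST = BACKUP_ROOT / "checksums.json"

def sha256(path: pathlib.Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest() -> None:
    files = (p for p in BACKUP_ROOT.rglob("*") if p.is_file() and p != MANIFEST)
    MANIFEST.write_text(json.dumps({str(p): sha256(p) for p in files}, indent=2))

def scrub() -> None:
    for path, expected in json.loads(MANIFEST.read_text()).items():
        if sha256(pathlib.Path(path)) != expected:
            print(f"CORRUPTION DETECTED: {path}")

if __name__ == "__main__":
    scrub() if MANIFEST.exists() else build_manifest()
```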
Put this thing in an external enclosure and it will blow away your friends xD
Is there a 122.88TB SSD in the market? What are next size tiers above this 61.44TB? 👍 Great video
$10,000+ is expensive. At least the product comes with a five-year warranty.
More like $5500-6500 depending on the shop. Not cheap, but if you compare it to buying 7.68TB enterprise drives not bad
Ah, great! But based on my experience with different "datacenter" Intel SSDs, I'd better wait for a Samsung PM-whatever of that size. The 7.68-15.36TB 983s and 9A3s have been unbeaten for years.
The military is loading in their pants right now, over the thought of putting those drives in a spy plane.
With the newly reported 1000+ layers to be introduced by 2025/26, hopefully the prices will go down. I could use some 100 TB with some spare space to grow; that neat little 5-bay flash NAS with 5x60TB would be nice.... BTW, what has happened with WORM flash storage? Also hoping the Chinese will pull through and we'll soon have the new optical media that can put 125 TB on one Blu-ray-sized medium.
We do write about a Petabyte per day and overwrite it again after about 3-4 days.
24/7
We read all our data once before we delete and overwrite it.
What's the data integrity / reliability of these drives anyway? Internally it's going to be like 64 × 8 Tbit chips in essentially RAID 0, hopefully with some kind of integrity checking... that's some crazy complex controller inside that drive
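A quick check on that die-count guess (the 8 Tbit-per-die figure is an assumption for illustration; the actual package layout isn't public here):

```python
# Rough die-count estimate; 8 Tbit (1 TB) per QLC die is an assumption, not a spec.
usable_tb = 61.44
tb_per_die = 8 / 8                # 8 Tbit = 1 TB
dies = usable_tb / tb_per_die     # ~62 dies before over-provisioning and spare area
print(round(dies), "dies (approx.)")
```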
In my mind the gotcha is still that most times a spinning disk will give errors and timeouts before dying. In my nearly 20 years, I've only ever come across dead SSDs. No warnings, just stopped working.
You can have SBCs with decent compute, in a footprint this size. Ones with oculink and dual 2.5G Ethernet. A pocket server you could game & render on.
This is important to call out. Workloads are not all the same, and in high-capacity workloads like storage, SSDs are becoming more and more important. I think the bandwidth is also important to call out. With the highest capacity HDDs you get the capacity, but good luck getting even 1 DWPD out of them. Not because of endurance but simply because they can't write that much data. They're too slow. Operations like resilvering an array of drives will take literally all day to complete, whereas with an SSD like this that work can be done in a matter of hours. These drives are about the capacity and not the speed, but just because they are winning on capacity doesn't mean that they are losing on speed compared to HDDs. Compared to HDDs they really are the best of both worlds, with the last remaining hurdle being $/GB, but even there they are starting to pressure HDDs.
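To put rough numbers on the resilver point (the throughput figures below are assumptions for illustration, not measured values):

```python
# Best-case rebuild time = capacity / sustained throughput.
# The throughput numbers are illustrative assumptions, not measurements.

def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    return capacity_tb * 1e6 / throughput_mb_s / 3600

print(f"24 TB HDD @ ~250 MB/s:  ~{rebuild_hours(24, 250):.0f} h")      # roughly a full day
print(f"61.44 TB SSD @ ~3 GB/s: ~{rebuild_hours(61.44, 3000):.0f} h")  # a matter of hours
```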
Crazy that my first hard drive was 20 megabytes on an Amiga - yes it did feel really small at the time too.
Get yourself some LPUs and make that cyber truck a mobile inference unit. I want my chat bot to live in the parking lot
13:24 Seems like Hard cards are making a comeback!
I need just 2 of them...😁
I think there are a lot of folks who would do well with 2
I remember when 9 GB SCSI drive was $3500.
You are always crazy excited...
he is always reviewing crazy new tech
Born that way.
I want a couple for my RUclips video creation computer!
The thing I couldn't help thinking the entire time watching the video is how painful it would be to back that thing up! Or WORSE, if it crashed and you lost 64 TB of data because of a faulty backup. 😮😮😮
I'm thinking of backing up a hundred or so 40GiB files each day in BTRFS RAID 1; if the price weren't so high, it would look really attractive.
That's a whole lotta stuff I prolly shouldn’t save
Man, my office/homelab DIY NAS could be so much smaller, quieter and more energy efficient. I hope the price per TB comes down enough to just be a small premium vs conventional HDDs. I'd be all over that.
The price for SSDs, especially SATA versions, needs to come down. A 2.5" drive typically has less than 1/5th of its internal space used. A SATA 3 interface and the ability to work with more NAND chips, and perhaps more cache, is all the controller needs. Not having to implement 4 PCIe lanes, and simplifying host I/O to just SATA, should allow some cost reduction on the controller.
You actually bought a Cybertruck?! LOL
Heck yes. I had a 2015 Model S before this and wanted that experience but in pick up to haul stuff and power servers on the road.
@@ServeTheHomeVideo The Cybertruck has enough of a bed to hold a server?
I see a lot of "up to" in the performance specs. Unless they're talking about something like heat output that's a non-starter for me, especially for a QLC at those prices. What does that even mean? 0 is "up to" 1000. They can try again when they'll guarantee specs using the term "at least".
The flashy marketing materials only show "up to" in the QLC/TLC comparison. Technology didn't improve this much in the last few months to reinvent multi-level cells, unfortunately. The drives are hitting those numbers thanks to a really large number of chips and smart use of the controller(s). Sorry to say this, but it's just chasing higher numbers, not an improvement of any kind... They just do more parallel writes, so the drive will have worse latency after filling up vs TLC or MLC, and is therefore worse for heavy workloads. They should just glue on more TLC or reinvent MLC to be denser, that's my opinion. Some would say it's for storage; I will answer: try doing data recovery on those, and better go HDD, or even better, buy tape xD
This is more for bulk storage, and even "worse latency after filling" is still going to be 10x or 100x better than mechanical drives anyway
With the flash tech used, I wonder if you could negate the downsides by using PrimoCache to help with the writes and endurance by fronting it with, say, an MLC or even TLC drive?
Like what I do with client builds that use HDDs: I offer a free 128GB SSD and a pro license to that software so their HDD is snappier than stock.
Remember, HDD companies don't make 3.5" SSDs because they would be affordable storage, not the expensive stuff they sell you now
Nice description video, I found your mobile data centre a bit of fun, God bless.
Thanks. A fun little Cybertruck project
This could almost store my entire plex library