One of my favorite things in LTT videos is Alex being worried about jank things that other people do even though he is the jank master himself. He will be worried about other people doing things and do something 10 times worse a minute later. And I am here for it.
"Hot plug" refers only to the switch chip itself, m.2 doesn't allow for it. The mechanical interface still has to be designed to ground the SSD before applying power and limit the inrush current.
I remember seeing something similar to this way back in the 80's. They were essentially the first SSDs. It was a board that you'd install a bunch of RAM on. Back then RAM didn't come on DIMMs. It was a bunch of socketed IC chips that looked like EPROM. Then the board would be installed in an ISA slot (it was probably EISA, don't remember). There were also no drivers. The board would present itself to the BIOS as a native storage device, kinda like how ST-506 and ATA did. The BIOS would facilitate communication between the OS and the "drive". It was a lot like MFM and RLL drives that had their own expansion cards. This was all pre-Windows and GUI OSes.
@Edgar Friendly I've just ordered one of those to finish off my insane Windows XP build. I don't know how practical it's going to be, but I just had to get one 😂
@@josephkarl2061 I used a 4GB Ram disk way back when. The only impractical thing was copying the program over to the ram disk before running and then remembering to copy back when done. I had it down to a script. These days I'd consider some sort of filesync program (syncthing maybe?) that handles your files to copy back/forth in more real time so you dont even have to remember the second step.
@Z C The plan for the card is to have XP installed on it so Windows will run faster. The Ramdisk is on a PCI card, and even though I've been into computers for a long time now, I never knew the PCI slot will still get power even after shut-down. It does have a backup battery, but we don't get too many power cuts where I am, so I should be good. I'm looking forward to seeing what a difference it makes.
@@Cyhawkx I almost bought one of those. I remember it used a battery to keep the storage alive when shut down. It could basically fit Windows XP and a few programs lol
I can see the utility of this. When I was still at uni (back when the fastest way to transfer data was TAs with USB keys), I worked with a professor on a project where he was using datasets that were about 32GB each, and he was having to go through about 100 to map them out. The software , which he made, required it to be loaded into memory, chucked out and recalled later. So this was on the order of 1 test per night on a (at the time) top of the range dual Xeon system. And I successfully got it up to 2 tests per night, hold your applause. Something like this would have been a Godsend. I mean, I'd have to do more work writing code, so I am glad it wasn't a thing.
Pros of BIOS RAID: - More "natural" path to installing your OS on a RAID volume. - Presents the volume to the OS as a single drive (mostly only significant because of the previous point). Cons of BIOS RAID: - In almost every case, you need drivers to use the volume in your OS. This is a common pain point during OS install. - For consumer boards, it's still just a software RAID (see impact in the next 2 points). - There are almost no performance benefits in regards to CPU usage (tested this many times). - Data throughput isn't any faster than using an OS software RAID (tested this many times). - If you ever move your drive(s) to a different system (especially a different brand) that doesn't use the same BIOS RAID, you can't access the data. OS level RAIDs will always be recognized by the OS (unless you toasted your drives somehow). Do not confuse this for a list comparing hardware vs software RAID because consumer grade BIOS RAID is not hardware RAID.
That last one can really screw you. "If you ever upgrade your hardware..." 😆 Almost all hardware like this will always get upgraded or break in some way. Doing BIOS RAID is very not smart. Also, if the RAID algorithm is ever slightly bug fixed or improved, you won't get those improvements, because BIOS upgrades stop being released after a few years.
@@timramich for a general consumer, yes it's dead. for enterprise use, it could be beneficial since hardware RAID with write cache usually have battery backup
We run enterprise database without hardware raid. Unless 50% of drives out of -100 in all nodes go Kaput immediately, we wouldn’t even lose data, and we can always bring up from backup and re-ingest the missing data during the downtime to get back up and running.
I love videos with Alex and Linus. Linus loves to do things the janky way, and Alex has an engineering background, so Linus has a hope that Alec will do things the correct way, but then when Alex does thing the janky way like Linus isn't happy, but then things work out, and he's happy again.
For the VFX Editor (this may be a simplistic view) things like Baselight X where uncompressed video with native 8K raster EXRs are used outside of a Proxy workflow for finishing, the sheer size of uncompressed EXRs combined with high frame rate requests now, means your bottle neck is storage speed. There are bespoke Linux appliances that are used for review and feedback for workflows in VFX where high-speed RAID Cards are used for caching these frames. This would be ideal for this purpose
Sabrent really going at it lately first with one of the best options for steam deck now searching out large silly projects to sponsor just to show how much progress they as a company has made is just insane. I swear haven't even heard of them till last year
From what I can tell (and some of this is speculation), they started out just selling just cheap generic adapters and usb hub type things, not being a player in the storage market at all, and then started selling some minimal effort generic reference design SSDs... Only, they chose the controller/reference design well so they were pretty decent SSDs at good prices, which managed to catapult them into the limelight, giving them the revenue to put some actual in-house design effort into the things, and now they're a major player.
Would love to see this thing loaded purely with Intel optane drives, i.e. 118gb p1600x ngff 2280 drives. It would only be ~2TB, of data, but imagine the iops.
Linus is doing this for more than a decade, you would think his enthusiasm will be gone by now but no, it increased by a ton. This is why LTT is so cool!
@@Sarcasshole yep I was gonna say the same thing. If your in the field that requires messing with lots of different set ups you’ll never be bored. It’s quite amazing how much stuff changes per year.
Why would u lose interest about something u are passionate about? I've been playing pc since the 90s, still gaming today, should I stop? I like women since I was a child, so I should change to men now as an adult? What kind of logic is that? I was passionate in science as a child, I should stop liking science now? By that logic a doctor loving their job should change their career to become a farmer, because they lost enthusiasm.
I'm happy this has come up as a topic of conversation. I've been wanting someone to test SSD speed to say yesteryear RAM versions like DDR3 DDR2 etc. to see if SSDs using swap files are at parity with RAM
@@SpruceMoose-iv8un you're not thinking big enough: you could open a game, unplug your computer, and then turn it back on and keep playing where you left off. Volatile memory holds us back from a lot of stuff.
Access times are about 10µs, whereas RAM is about 10ns, so about 1000 times slower. RAID improves data transfer rates but doesn't do anything for access time.
it would be cool if someone made some hardware that use lots of GDDR6X ram-modules as storage, and had some sort of batterybackup when the computer is off, to keep the data. (and then running them in Raid :D
This is perfect for data centers! I would love something like this for computer vision models! But I am sure we need motherboards with a lot of pcie lanes, like on the thread ripper.
A lot easier with u.2 or EX.x in the datacenter for hot swap and cooling. And sapphire rapids/EPYC have a lot more PCIE lanes, I.E. it already exists. Not to take away from how cool this card is mind you.
@@nadtz Yeah, this card is rather niche. It strikes me as something you'd use when you needed the space but couldn't use a full rack mount for whatever reason.
@@DFX2KX The other advantage of this card is that it takes completely standard PCIe drives, so you can buy them off the shelf, rather than having to pay enterprise prices for U.2 etc. Or I suppose if you couldn't get the budget to upgrade an older server, but it had a spare PCIe slot and you wanted to really spice up it's storage...
I could totally see this device being used in massive weather simulations where you need to store the values of the atmospheric conditions within individual blocks of data and need the fastest access possible to that data. Being able to store the entirety of the information contained within a storm on a single device would prove invaluable to meteorologists, especially Dr. Orr in Minnesota who's been simulating tornadoes in his supercomputer.
I'm in love with this thing because of NAS storage stuff. Sure, this card is really expensive and might be overkill but having 21 SSDs gives you a lot in terms of fault tolerance (RAID5 or RAID6) while also consuming less space. I'm excited to see where this is going!
It's very cool and I would love to have one but it would be cheaper to use 4 way passive cards in Xeon W or Epyc (including an upgrade to Xeon W - the card alone is 3 grand) if you only wanted one card worth of SSDs, it would also give you better cooling since those cards have decent heatsinks
You can even get 4tb crucial nvme drives at best buy for $200 right now, so i could totally see this being in a consumer media server. If a drive fails itd take like an hour to rebuild from parity instead of like a week with hdd
@mtrebtreboschi5722 still, for the price of one 4tb SSD, I could get roughly two 4TB HDDs or one 8TB HDD, which, in most cases, is more than enough. But yes, you're right: with the trend being that drives get bigger, faster, and cheaper per TB generally, it'll most likely be a thing of the near future.
@@playeronthebeat yeah hdds are about at least 3x cheaper per terabyte than any high capacity ssd, seagate exos drives often go on sale for $14 per tb. SSDs have the advantage in size, power consumption, rebuild time, and read-only longevity, which will all mostly be negligible for most people, but I guess if somebody has the money, this is the way to do it
I was actually expecting something jank when Alex showed up, but just 21 SSDs on a PCIe port surprisingly tame. With that said, I wonder how much of the performance would be retained if this SSD was used on a PS5.
the insane part is how much of a "great fit" and "bargain price" this could be considered for high-end enterprise and workstation systems working on big datasets. This density of performance and storage without "custom" drives is unrivaled
Machine learning is definitely going to be a big use case. Even sample datasets we use, given for educational purposes and are not meant to be very challenging , are easily 500GB in size sometimes. And they relatively small datasets since they really do not have that many different variables. Now imagine having to handle literally petabytes of data.
You know this just makes sense having cards like this with chips that can handle all of this. I can not imagine how much this could help once pcie 5 really starts to come around in the data center. If they come up with a card to be able to double the amount of drives using pcie 3 with the same card effectively that would keep more out of landfills long term. Love this techy
You are much more likely to see ex.x or u.2 in datacenters. Imagine how much fun having to swap one of these drives out would be compared to walking over and replacing the drive sled with the blinking light.
@@nadtz That's just an engineering problem though, I could imagine seeing a drive sled that contains 4 mini sleds (I name them Slugs™️) each with their own sub blinky blink.
The X21 is more than just a one-trick pony. The M.2 slot supports a number of other devices. For instance, you could use wireless cards with M.2 A+E converter boards to set each wireless card to a specific wireless channel for spectrum monitoring. There’s even an AI angle here. A company called Axelera makes an M.2 AI Edge accelerator module that could be used by the Apex card at some point as well.
@@JordansTechJunk Meanwhile, here I was just thinking of using 21 Asus pce-AC88 wireless cards to pci m.2 adapters and running all 21 wireless cards in parallel for a grand total of a theoretical internet speeds of 44.1 Gb/s upload and download. Do you think that's too much RF radiation? 😁
@@luminatrixfanfiction I who know nothing really, could speculate about interference making it not work. Not good enough anyway. But then again, with the right hardware and software that may not be a thing to worry about, I am simply speculating.
its crazy to think just a few years ago Sabrent was the "try it if you want" brand I bought a 240gb drive probably 5 years ago and I was skeptical but damn they are on my list of good brands for M.2s
@@isaacbejjani5116 nah. hynix, micron and intel are the best at least from my opinion. idk about their consumer grade SSDs but some of Samsung's enterprise SSDs are just.. problematic. early PM863, PM883, PM1733 seems having issues with its firmware
Man, I remember back in the day running two of the smaller Raptor drives (32gb?) in Raid 0 and just loving life. I may not have had the best graphics card or processor, but I loaded into BF2 faster than any of my friends for a while. What a time to be alive.
Sabrent just asking for a shoutout and casually sending that many 8TB SSD over is a big W move! They truly support Linus craziness and we all benefit from it! :D
I could see this working well for engineering simulations (FEA, CFD) that are effectively limited by the amount of ram that you have. Before something like this, maxing out the ram and letting the solver use the hard drive would increase solve time by something like 10x.
But you would most likely use DCPMM for that because it has much lower latency. You could probably use this for some kind of custom swap in some cases, but at some point there's diminishing returns. The advantage of this card is that it's compact. There are systems that have lots and lots of u.2 connections and can support far greater capacity and speed.
The newest simulation machines where I work are for chip cooler simulations. They use CXL RAM as a layer between the SSDs and RAM. We also have sapphire Rapids HBM in dual-socket for it as well.
This would still be slower, 25Gb/s on ram is per channel, if you are running simulations that intense I would think you would have a very dece amount of ram and be running it in 8 channel. This is cheaper for sure but I don’t think the OS is designed to use swap like that, I think it still goes through ram first and constantly dumps back and forth from ram to disk increasing your CPU usage drastically. This would however be very useful in AI training, the datasets are massive and gpus nowadays especially if you have like 4 A6000s could eat up quite a few IOPs. I would like to see a water cooled version of this, maybe they will partner up with an ssd manufacturer and offer that later as a factory assembled package.
@@radugrigoras: There's nothing particularly special about this. It sacrifices speed for compactness is all. So it's great for a system with few PCIe lanes or systems that need to be small. There are very few situations where this is something you'd even care to have.
I haven’t done this kind of work since 2013 (finite difference simulations on a 192gb, dual 8 core Xeon machine) so I’m guessing things have changed and gotten better in that time span.
Just imagine how insane the amount of storage you'd have with multiples of these! Its crazy that with only a couple of these and you're already at a petabyte, and in such a tiny amount of space. Crazy.
Those chips are used quite a bit in NAS and storage backups in my enterprise environment. Dell Avamar and Isilon units will connect to switch fabrics with those chips. Very fast stuff!
I've been looking for some solutions for backing up my personal files and media collection. Can you recommend any resources that will help me do that? I have older stuff on old nearly-dead HDDs that have been removed from their systems, but that's clearly not a comfortable, current, or reliable storage method
@@Thalanox why not just build a NAS using TrueNAS or just ZFS as a base. It's very well documented, easy and can be run on a old system. Just grab a bunch of used certified drives, and a old PC and swap the power supply for something decent and done pretty much. Set up a RAID array and put the files on it and call it a day. Drive fails? you're still covered
If you asked me 15 years ago where storage speeds could be I wouldn't have guessed this fast... It's hard for me to be super excited because I feel like the consumer application isn't really there... but the use case in cloud computing will be huge and we'll see it's affects in our services. It's interesting how obfuscated this technology is for general consumers even though we'll all see benefits.
I remember being impressed when early Sata 2 SSD's were breaking 200MB/sec read speeds and people would Raid 0 them for 400+MB/sec I'm still super impressed at my Gen 3 NVME 3,500MB/sec speeds... how far tech has come
Server motherboards already have a fak ton of ssd slots built-in. I built 2 servers like that 7 years ago with 10 Intel sata ssd (consumer grade) for my job. I don't remember the exact performance (probably 10 x sata speed) but it was insane for the relatively low cost. We put 40GBps network cards and a 40GBps switch between them and built one storage array spanning the two servers. It was a proof of concept for a cheap high availability hyper-v cluster. Good times.
Yeah we need this in consumer hands. FFS I still have 2 mechanical drives in my computer and like 5+ HDD for storage because they have big capacities and cheap... even if way slower.
12:25 55°C is apparently the sweet spot for nand flash, it doesn't like being colder or hotter Colder = increased data retention, write wear Hotter = increased hardware degradation
@@pandozmc I think I first read this on a EKWB site but now I can't find it. After a bit of googling I found this anandtech article from 2015 back when SSD data retention periods was becoming a concern. I should note that my memory isn't quite accurate, so if you do look this up feel free to ignore what I said in previous comment. here's what anandtech says : _The conductivity of a semiconductor scales with temperature, which is bad news for NAND because when it's unpowered the electrons are not supposed to move as that would change the charge of the cell. In other words, as the temperature increases, the electrons escape the floating gate faster that ultimately changes the voltage state of the cell and renders data unreadable (i.e. the drive no longer retains data)._ _For active use the temperature has the opposite effect. Because higher temperature makes the silicon more conductive, the flow of current is higher during program/erase operation and causes less stress on the tunnel oxide, improving the endurance of the cell because endurance is practically limited by tunnel oxide's ability to hold the electrons inside the floating gate._
I could probaly see a card like that for bio-informatics use case. Wholegenome sequencing datasets can be multiple TB of data, a card like that would 100% speed up analysis of such masive amount of data
This would be great for a Virtual Computer center - I’m biased towards education - having Lab VMs available on demand across a LAN would be fantastic. I’m only part way through watching as I post this, but I would be very interested in IOPS on a new heavy access load. Aaaaand… you didn’t let me down! You did the IOPS. Writes in a RAID will always cause a performance hit (especially because you won’t be able to use single parity bit and will incur extra cost on >8TB - which this definitely is!), but if I was using this it would be for VMs and Data with a heavier READ profile and it would be pretty cool.
16:15 if i remember correctly, when task manager says a disk is at 100%, that just means it was being actively written to 100% of the time since the last tick that it measured.
@''/ad cached.... you mean stored on a massive bank of drives? Beyond that, there's still computing in order to mesh data together to create something new... this is where quick access storage helps. Being able to access multiple pieces of data quickly means you're only limited by computing resources, not storage resources which tends to be an issue.
Over 10 years ago, before discovering Netflix back when it was good, I would be asked by my family to, find, and host entire seasons of their favorite shows as well as some of their favorite movies. I used to spend ages burning custom DVDs, but we were in the future with a 1080 HDTV and our Xbox 360 could connect to my PC using windows media center turning it into an HD streaming home server on the side. This meant a lot of time downloading overnight, and my 1TB-ish hard drive with more storage than I thought I would ever need was starting to actually visibly fill up, as I also had many multi-gig games as well as rips of the disc images for easily swapping into an emulated drive without swapping/scratching physical media, meaning my physical games needed space for the install and ISO, as well as gigabytes of music in my "My Little Pony Fan Music/Remixes" folder alone. Basically it was a well-loved PC that I volunteered my extra space to turn into a windows media center HD video streaming server for my family on the side. Now Netflix doesn't have all of the good shows anymore, and it is so expensive and inconvenient to be able to stream all of the shows you like as you juggle tons of account credentials, and it is getting harder if not impossible to have an ad-free experience no matter how much money you throw at them. So what if I wanted the entire library of every good show and movie stored locally in HD, or even 4K as we just upgraded a few months back? I don't want TV myself but I doubt the streaming services or our ISP are letting us stream much if any 4K content despite us having the fastest internet package in our area and I purchased a 4k streaming capable router last time we needed one so we could be 4K ready when the time came. Well, acquiring thousands of hours of TV and movies in the best quality available is its own beast, but should I ever win the lottery at least I know there's a product that can theoretically work with a home machine, with a modular nature so extra storage can be added as needed instead of having to buy a larger drive and copy the files from the old one, and also doesn't require that you buy $30k+ worth of SSDs up front and just add more as needed. Basically, if you're willing to spend your time ripping a massive collection of 4K blu-rays and whatever else it takes to get your hands on every episode of every show everyone in your household loves, and the movies too, in the highest quality available, you can make your own media server with no ads or switching between 3 different services to watch every Star Trek series and movie. No "we can't watch this because my parents in the next state are on the account right now" It would be a huge money and time investment to get the card, drives, and all of the media, but it is a lot more technically feasible now to have your own HD/4K media center just by adding an extra card to a regular PC.
That commercial chiller that you have for your lab should be used with this on a milled cooling block just for this application. See what you could do as far demos go. You might be able to see those numbers climb higher.
just to be clear, bios raid limitation does not matter for anyone in professional space. Raid is basically broken for anything serious - it can detect if drive is dead and then migrate to spare one (in case of something like raid5) but the issue is when drive is not yet dead, it's failing - in that case it might corrupt your data. that's why for any serious data storage you want to use something like ZFS so it's able to check checksums of your data and detect that something was corrupted on drive and fix it on the fly what would be also useful use case that such amount of iops? ZFS deduplication, that required to add another layer before accessing data -> you want to get data block X, then you need to read from deduplication map, and get read addres for data Y and then access that data. This basically halves amount of IOPS you can do (assuming no cache hits in memory) that's why it's not used that often. In this configuration you could get a much more effective storage when expecting duplicated data
Well, Linus guys fall for this shit some years ago... search on channel "Our data is GONE... Again - Petabyte Project Recovery" They Rely solely on ZFS to handle disks and they lose.
I love how they included that clip of shocking hardware to test if it can be killed while Linus was holding it with his thumb on the slot contacts. You just know someone had commented how he was going to zap the board. Edit: oh man, this just reminded me of something I had back in the IBM XT days. Back then I had a 20 mb hard drive built into an ISA card, aka a Hard Card.
This was a year ago and Linus was right. These cards did catch on a little, and they work. I put another brands PCIe 4.0 card in an older computer, stuck 2 M.2 drives on it, and it works great. I cannot tell they're on a PCIe board and not on the mobo. Worked great!
Linus: "Under normal circumstances, you wouldn't do something dumb like configure 21 drives in a RAID0" Also Linus: Heh heh, you wanna do a RAID0? This product is bananas, I'm gonna have to watch this vid a few times more.
Having two of those cards, each RAID-0 running as a RAID-1 would be pretty much perfect setup for a big database server. Combine those cards with say 1 TB of RAM and you can execute huge queries very rapidly against a 100 TB database.
@@exorr81 That's true. I explained that as a combination of RAID-0 and RAID-1 for two reasons: (1) RAID-0 and RAID-1 are easier to understand for most people than yet another RAID mode. (2) If you configure such setup as RAID-10 directly, changes are higher that you mess in a way that causes the whole RAID to be offline if one of those cards fails. (That is, you configure stripes and mirrors incorrectly over all 42 devices.)
I was going to get a 2nd Synology and put 8TB SSDs into it. But it looks like m.2 storage is being taken seriously as an option for RAID. I'd much rather go with NVMe if it can be a comparable price.
The primary use for these PCIe switches are actually in servers - when there's lots of SSDs in a server backplane they're sometimes connected directly, but often there's one or more big PCI-e switches driving the slots. And each switch will have a 16x or 32x (yes, this is a thing even if the standard only lists up to 16x) PCIe 4.0 or 5.0 uplink. This is the real reason why PCIe link speeds are again trending upwards quickly after a long hiatus (between 3.0 and 4.0), there's finally a user (servers) that are willing to pay for development (there's not enough entuast to pay for it by orders of magnitude). And I assume the reason the card doesn't have any fans is because they expect it to end up in servers where there's a large amount of airflow provided by the server itself (sometimes capable of cooling 400W GPU or AI accelerator with just the servers airflow, no fan on the card. It's also unfortunately why PCIe switches disappeared from consumer motherboards (many early SLI motherboards had them), the PCIe switch vendors jacked up the prices by 10x because most were sold to server vendors and they were willing to pay. "Better" (IE expensive) server backplane often accept PCIe (U.2 or U.3, both 4x PCIe), SAS (12Gbps) or SATA (6Gbps) in all slots by routing each slot to either to the PCIe switch(es) or the SAS switch(es). Yes, SAS switches offers all the same features (fabric, multiple servers, dynamic load allocation to the "uplink" and so on) to both SAS and SATA disks, they've been in server backplanes for a long time (long before PCIe SSDs were a thing, never mind M.2). It does makes me wonder if some of the PCIe switch chips also does SAS, that would reduce chipcount and make routing far simpler which means they can ask the server builders for more money...
As far as I can tell PCIe has been getting faster at a pretty consistent rate for many years, it's just Intel was a laggard in going to PCIe 4 so gen 3 was around longer and by the time they were on gen 4 we were already close to gen 5. If you look at it in terms of AMD instead gen 4 was around for plenty of time.
@@bosstowndynamics5488 No, 4.0 was seriously late. The official introduction years is 2003, 2007, 2010, 2017, 2019, 2022, (2025 planned) for 1.0 to 7.0 (check official graphs or the Wikipedia article). Note how they all are around 3 year, except one that took 7 year - that's the 3.0 to 4.0 transition and it's a big outlier.
I had thought of this when SSDs first came out. I thought to myself, what happens if storage drives become so fast that they make RAM obsolete. Being able to just read and write information straight off a storage drive has to be faster than going through a RAM middleman.
Well it could be done, however you would need to do "SSDRAM" changes fast as write is the death of the ssd. Depending on workload etc i think in a year you would need to swap it.
The reason why we don't have SSDs replace RAM is that it's a little more than just raw read/write speed. Those reasons are latency and longevity, both of which are affected by how data is written to SSDs. With that said, I'm sure there are highly specialized computers out there that don't have RAM and just use HDDs/SSDs, but they certainly aren't common to consumers.
That's not how it works. The closer you are to the cpu die the better. That's why cpu cache exists, which is essentially "ram" that is located as close to the die as possible. For example, the time to reference L1 cache is 0.5ns. The time it takes to reference RAM is 100ns. That is an absolutely massive difference. You can't only look at raw speeds, you need to factor latency. That's why lower memory latency improves memory performance even if raw speeds remain the same. Often a decrease in latency is much more significant than raw speed.
2:25 "I would suck on your toes for this many drives" don't drop yourself like that Linus 🤣🤣 PS: these jokes come exclusively from love and admiration -please don't drop me from your channel- .
@SABRENT I bought one of your SSDs recently. I am usually brand focused and I did not know your brand before I saw several LTT videos with them. I just wanted to let you know, that your sponsoring here actually works :)
Probably one of the more interesting videos as of late. Clearly I need to do more research on SSD tech, PCIe, and RAID. This kind of went over my head at points.
I'd be hyped if there was a budget version of this. You can get a budget 1tb ssd for ~40€ currently here in Germany, the only issue is I only got so many m.2 slots. This thing would solve that problem.
First thing I thought of was for video editing. A feature film at 8k raw can take up massive drive space and if you collaborate on a project this would be ideal. At about 8 tb's per hour of raw footage this would be ideal.
On the Topic of Hot-Swap M.2: This feature must be supported by the Controller AND the SSD - and the Crucial P3 in the test does not support that. Through my personal testing, I've found not all hot swap feature on M.2 is the same - its basically a hit-or miss (for me at least)
I thought by the nature of its design all PCIe device chips need to support hotswap. The hard part is power switching and mechanical considerations which make m.2 unsuitable.
Well I literally just attached 8 NVMe 2TB drives to my ITX server. I bifurcated the x16 gen3 slot into two x8 slots using bifurcation. In each slot I placed a PEX 8747 board carrying 4 NVMe cards each. Prices of 2TB drives really went down recently. They finally are under $100.
It sound like it dynamically routes pci to where it is needed, It is super cool that it is possible to run any 4 drives at full pci speed or 8 cheaper slower drives at their slower speeds at the same time. Onboard raid controller could be perfect, so that the pci bandwidth only needs to transfer to the drive controller and the drive controller does the mirror instead of using pci bandwidth to mirror.
Great video, would love to see more affordable non-Bifurcation options like the highpoint for comparison. Also thanks for the linux testing as well! Some of us don't use windows 🤣
If I were to have like a private Emby media server with a bunch of these ssd cards I could fit thousands of full shows. And I mean all episodes of all those shows, from start to finish, and still have storage for all of my games. Sheesh that's a lot!
I bought a 1TB Sabrent for my Steam Deck, and it has been great. I'm curiously about why the help the brand has exploded in what seems like less than a month. If the card ends up maintaining its functionality for the advertised lifespan, I will be extremely happy. I just get the "too good to be true" vibe from their products.
@@OdisHarkins when it comes to data storage very rarely gone with any company that wasn't a "household" in the US. I am still very impressed by the performance I've been getting out of the card so far.
@@xamindar Plus they're pretty much just a Phison controller paired with name brand NAND. Hard to make a bad drive when all the hard parts are outsourced to experienced third parties.
It's faster than _a_ stick of 3200MT RAM. Not 2 sticks as you would use in dual channel. As for SLC cache, the Rocket 4 8TB has around 880GB of it per drive, so yeah, if you have 4 drives it'll slow down after you've written 3.5TB, but that's an awful lot of data to need to write at ~24GB/s. That thing still is insanely fast though.
13:30. There are brackets you can buy that let you attach pc fans to them to blow on cards in pci slot. They would actuallyvwork really well for this particular set up
I would LOVE to have one of those in my dev computer. Imagine the gain! Developing 3D games / VR / AR applications can take up a LOT of storage space. And compiling / rendering needs to read / write as much data as the CPU / GPU can handle. Add project management / version control, and both capacity and speed of a card like this become VERY interesting indeed! Of course I can't afford even a basic dev system (not even 4.5K euro), let alone add this card XD
One of my favorite things in LTT videos is Alex being worried about jank things that other people do even though he is the jank master himself. He will be worried about other people doing things and do something 10 times worse a minute later. And I am here for it.
Thats because hes an engineer. That makes him able to do jank safely, ive never seen him fail EVER hahah
@@simenk.r.8237 The fail wasn't in the video itself but in his Intel Extreme Tech Upgrade he mentions how he fried the DIY CPU
@@simenk.r.8237 he's not an engineer. He took a few undergrad engineering classes. FAR from any engineering degree, and miles from being an engineer.
Alex's janky ideas are my favorite LTT videos.
do as I say, not as I do.
Linus holding a 31.000k SSD? This is going to be a wild ride.
At least it's not as likely to break as a hard drive when he drops it!
Just say 31,0000 or 31k bruh.
Saying it with “31 THOUSAND” has a more ✨Dramatic Flare ✨
Casual 31m dollar SSD?
@@rhandycs theyre foreigners and use their commas and decimals backwards and not the way of us in the land of the free
i absolutely love this 'we shouldnt be doing this' dynamic that these two have
I love that Alex somehow managed to bring dodgy cooling to an SSD product review
It's like watching a shopping channel... "We shouldn't use the product like this!" "Oh wow, it really works!"
I wouldn't buy Sabrent ssd, unreliable storage.
Pinky and the Brain!
"Hot plug" refers only to the switch chip itself, m.2 doesn't allow for it. The mechanical interface still has to be designed to ground the SSD before applying power and limit the inrush current.
You sound like you know what you are talking about. I believe you.
@@metaforcesaber About what most people's thought process is watching these vids lmfao
i remember Wendell from L1T talks about this on his nvme hot swap bays video. the whole design doesnt seem to be hot swappable in mind also
Apply for job in LTT
I think Alex did say something like this at 5:36. He didnt fully explain why it doesn't work tho 😶
I remember seeing something similar to this way back in the 80's. They were essentially the first SSDs. It was a board that you'd install a bunch of RAM on. Back then RAM didn't come on DIMMs. It was a bunch of socketed IC chips that looked like EPROM. Then the board would be installed in an ISA slot (it was probably EISA, don't remember). There were also no drivers. The board would present itself to the BIOS as a native storage device, kinda like how ST-506 and ATA did. The BIOS would facilitate communication between the OS and the "drive". It was a lot like MFM and RLL drives that had their own expansion cards. This was all pre-Windows and GUI OSes.
There was a similar card in the 00s that too ddr dims
@Edgar Friendly I've just ordered one of those to finish off my insane Windows XP build. I don't know how practical it's going to be, but I just had to get one 😂
@@josephkarl2061 I used a 4GB Ram disk way back when. The only impractical thing was copying the program over to the ram disk before running and then remembering to copy back when done. I had it down to a script.
These days I'd consider some sort of filesync program (syncthing maybe?) that handles your files to copy back/forth in more real time so you dont even have to remember the second step.
@Z C The plan for the card is to have XP installed on it so Windows will run faster. The Ramdisk is on a PCI card, and even though I've been into computers for a long time now, I never knew the PCI slot will still get power even after shut-down. It does have a backup battery, but we don't get too many power cuts where I am, so I should be good. I'm looking forward to seeing what a difference it makes.
@@Cyhawkx I almost bought one of those. I remember it used a battery to keep the storage alive when shut down.
It could basically fit Windows XP and a few programs lol
I can see the utility of this.
When I was still at uni (back when the fastest way to transfer data was TAs with USB keys), I worked with a professor on a project where he was using datasets that were about 32GB each, and he was having to go through about 100 to map them out. The software , which he made, required it to be loaded into memory, chucked out and recalled later. So this was on the order of 1 test per night on a (at the time) top of the range dual Xeon system. And I successfully got it up to 2 tests per night, hold your applause.
Something like this would have been a Godsend. I mean, I'd have to do more work writing code, so I am glad it wasn't a thing.
Pros of BIOS RAID:
- More "natural" path to installing your OS on a RAID volume.
- Presents the volume to the OS as a single drive (mostly only significant because of the previous point).
Cons of BIOS RAID:
- In almost every case, you need drivers to use the volume in your OS. This is a common pain point during OS install.
- For consumer boards, it's still just a software RAID (see impact in the next 2 points).
- There are almost no performance benefits in regards to CPU usage (tested this many times).
- Data throughput isn't any faster than using an OS software RAID (tested this many times).
- If you ever move your drive(s) to a different system (especially a different brand) that doesn't use the same BIOS RAID, you can't access the data. OS level RAIDs will always be recognized by the OS (unless you toasted your drives somehow).
Do not confuse this for a list comparing hardware vs software RAID because consumer grade BIOS RAID is not hardware RAID.
Hardware RAID is dead anyway...
yeah, i'd rather use zfs raidz1. easy to use, easy to replace broken drive.
That last one can really screw you. "If you ever upgrade your hardware..." 😆 Almost all hardware like this will always get upgraded or break in some way. Doing BIOS RAID is very not smart. Also, if the RAID algorithm is ever slightly bug fixed or improved, you won't get those improvements, because BIOS upgrades stop being released after a few years.
@@timramich for a general consumer, yes it's dead. for enterprise use, it could be beneficial since hardware RAID with write cache usually have battery backup
We run enterprise database without hardware raid. Unless 50% of drives out of -100 in all nodes go Kaput immediately, we wouldn’t even lose data, and we can always bring up from backup and re-ingest the missing data during the downtime to get back up and running.
I love videos with Alex and Linus. Linus loves to do things the janky way, and Alex has an engineering background, so Linus has a hope that Alec will do things the correct way, but then when Alex does thing the janky way like Linus isn't happy, but then things work out, and he's happy again.
Fake video Linus didn't drop it once
He dropped 21 ssd's on the desk at the start! Intentionally but it counts, right?
Lllll
This comment was a rollercoaster ride. Completely correct though.
@@revdarian Probably empty boxes
For the VFX Editor (this may be a simplistic view)
things like Baselight X where uncompressed video with native 8K raster EXRs are used outside of a Proxy workflow for finishing, the sheer size of uncompressed EXRs combined with high frame rate requests now, means your bottle neck is storage speed.
There are bespoke Linux appliances that are used for review and feedback for workflows in VFX where high-speed RAID Cards are used for caching these frames.
This would be ideal for this purpose
I’m not so sure. A ram’s top speed is 20 mph. How could an SSD possibly outclass an animal of such swiftness?
LOL
A RAM has 395hp and 410lb-ft of torque.
@@naoltitude9516😊😊
These ram jokes are great, before I could Dodge the first one I saw a second one lol.
Well... drop the SSD from 60000 feet, it will certainly go supersonic before it reaches thinner air.
Sabrent really going at it lately first with one of the best options for steam deck now searching out large silly projects to sponsor just to show how much progress they as a company has made is just insane. I swear haven't even heard of them till last year
From what I can tell (and some of this is speculation), they started out just selling just cheap generic adapters and usb hub type things, not being a player in the storage market at all, and then started selling some minimal effort generic reference design SSDs... Only, they chose the controller/reference design well so they were pretty decent SSDs at good prices, which managed to catapult them into the limelight, giving them the revenue to put some actual in-house design effort into the things, and now they're a major player.
@@guspaz Your analysis is spot on: Sabrent entered the SSD market in 2018 and are now a reputable brand.
Sabrent has been around since 1998 so they're definitely not a new company
They are the Samsung of 2017
@@arkayngaming727 yeah I bought my first Sabrent gen 3 in 2018 or 2020(?)
my dad says: "if more data is stored in 1 point/place, then its scarier to lose that point."
Imagine being a human that can only exist in one place and time and can't be replaced. Very scary!
Would love to see this thing loaded purely with Intel optane drives, i.e. 118gb p1600x ngff 2280 drives. It would only be ~2TB, of data, but imagine the iops.
wow, isn`t Optane dead??? I tough that mdern nvmes had surassed it
@@Yamagatabr they haven't
@@NadeemAhmed-nv2br why did the project get killed off then?
@@bosermann4963 Nobody bought it. Most consumers did not and still do not need Optane and it is a lot more expensive than normal m.2.
They should get a bunch of P5800Xs and do it with them. Those drives are fast as hell.
Linus is doing this for more than a decade, you would think his enthusiasm will be gone by now but no, it increased by a ton. This is why LTT is so cool!
To be fair this is a rapidly changing industry. It's kinda hard to get bored of it
@@Sarcasshole yep I was gonna say the same thing. If your in the field that requires messing with lots of different set ups you’ll never be bored. It’s quite amazing how much stuff changes per year.
Why would u lose interest about something u are passionate about? I've been playing pc since the 90s, still gaming today, should I stop? I like women since I was a child, so I should change to men now as an adult? What kind of logic is that? I was passionate in science as a child, I should stop liking science now? By that logic a doctor loving their job should change their career to become a farmer, because they lost enthusiasm.
0:47
5:10 I don't know why but I felt a chill down my spine of someone out there jaw dropping at this zoomed out moment.
I'm happy this has come up as a topic of conversation. I've been wanting someone to test SSD speed to say yesteryear RAM versions like DDR3 DDR2 etc. to see if SSDs using swap files are at parity with RAM
Imagine the day when SSD's become as fast as modern ram then ram can be deleted from a PC.
@@SpruceMoose-iv8un you're not thinking big enough: you could open a game, unplug your computer, and then turn it back on and keep playing where you left off. Volatile memory holds us back from a lot of stuff.
Access times are about 10µs, whereas RAM is about 10ns, so about 1000 times slower. RAID improves data transfer rates but doesn't do anything for access time.
Intel's 3D xpoint was the closest thing to a NAND and RAM hybrid, too bad it didn't take off
it would be cool if someone made some hardware that use lots of GDDR6X ram-modules as storage, and had some sort of batterybackup when the computer is off, to keep the data. (and then running them in Raid :D
This is perfect for data centers! I would love something like this for computer vision models! But I am sure we need motherboards with a lot of pcie lanes, like on the thread ripper.
A lot easier with u.2 or EX.x in the datacenter for hot swap and cooling. And sapphire rapids/EPYC have a lot more PCIE lanes, I.E. it already exists. Not to take away from how cool this card is mind you.
@@nadtz Yeah, this card is rather niche. It strikes me as something you'd use when you needed the space but couldn't use a full rack mount for whatever reason.
@@DFX2KX The other advantage of this card is that it takes completely standard PCIe drives, so you can buy them off the shelf, rather than having to pay enterprise prices for U.2 etc.
Or I suppose if you couldn't get the budget to upgrade an older server, but it had a spare PCIe slot and you wanted to really spice up it's storage...
@@nadtz E1.L doubles a neat intruder protection.
Its definitely not good for enterprise use tbh. Those drives only have like 3800 tbw rating
I could totally see this device being used in massive weather simulations where you need to store the values of the atmospheric conditions within individual blocks of data and need the fastest access possible to that data. Being able to store the entirety of the information contained within a storm on a single device would prove invaluable to meteorologists, especially Dr. Orr in Minnesota who's been simulating tornadoes in his supercomputer.
I’m a met student starting in the fall and I was thinking similarly.
noice
I just want this for the small form factor...
I'm in love with this thing because of NAS storage stuff. Sure, this card is really expensive and might be overkill but having 21 SSDs gives you a lot in terms of fault tolerance (RAID5 or RAID6) while also consuming less space.
I'm excited to see where this is going!
supercloud, metaverse, digital convergence, whatever you wanna call it
It's very cool and I would love to have one but it would be cheaper to use 4 way passive cards in Xeon W or Epyc (including an upgrade to Xeon W - the card alone is 3 grand) if you only wanted one card worth of SSDs, it would also give you better cooling since those cards have decent heatsinks
You can even get 4tb crucial nvme drives at best buy for $200 right now, so i could totally see this being in a consumer media server. If a drive fails itd take like an hour to rebuild from parity instead of like a week with hdd
@mtrebtreboschi5722 still, for the price of one 4tb SSD, I could get roughly two 4TB HDDs or one 8TB HDD, which, in most cases, is more than enough.
But yes, you're right: with the trend being that drives get bigger, faster, and cheaper per TB generally, it'll most likely be a thing of the near future.
@@playeronthebeat yeah hdds are about at least 3x cheaper per terabyte than any high capacity ssd, seagate exos drives often go on sale for $14 per tb.
SSDs have the advantage in size, power consumption, rebuild time, and read-only longevity, which will all mostly be negligible for most people, but I guess if somebody has the money, this is the way to do it
We finally found a use for the Radeon VII: A measuring stick
Bro could have 66,945,606 (66 million) copies of the original doom
Underrated comment
Sheesh
I was actually expecting something jank when Alex showed up, but just 21 SSDs on a PCIe port surprisingly tame. With that said, I wonder how much of the performance would be retained if this SSD was used on a PS5.
Alex and no jank? I’m disappointed
Edit: got to the cooling part. I am now satisfied.
How would you even connect it to PS5? It’s not like you can throw a riser there. Or do you mean If Sony used one?
@@Simon_Denmark M.2 to PCIe risers do actually exist... you just need to figure out a way to power them
I can tell you wrote that before they got to the cooling section. 😅
Think he meant just one of those Sabrent ssds not the whole pair card
the insane part is how much of a "great fit" and "bargain price" this could be considered for high-end enterprise and workstation systems working on big datasets. This density of performance and storage without "custom" drives is unrivaled
Machine learning is definitely going to be a big use case. Even sample datasets we use, given for educational purposes and are not meant to be very challenging , are easily 500GB in size sometimes. And they relatively small datasets since they really do not have that many different variables. Now imagine having to handle literally petabytes of data.
You know this just makes sense having cards like this with chips that can handle all of this. I can not imagine how much this could help once pcie 5 really starts to come around in the data center. If they come up with a card to be able to double the amount of drives using pcie 3 with the same card effectively that would keep more out of landfills long term. Love this techy
You are much more likely to see ex.x or u.2 in datacenters. Imagine how much fun having to swap one of these drives out would be compared to walking over and replacing the drive sled with the blinking light.
@@nadtz
That's just an engineering problem though, I could imagine seeing a drive sled that contains 4 mini sleds (I name them Slugs™️) each with their own sub blinky blink.
@@MostlyPennyCat Not exactly sure why when the solution already exists.
Thanks SABRENT for making this happen!
They don't care about you 😂
@@Azmodaeus49 True but I got to watch a cool video for free so I'm not complaining
@@Azmodaeus49 forgot to take your happy pills again?
@@Azmodaeus49 true in more ways than one, their customer service is bad apperently
how to build worlds most expencive toaster :D
The X21 is more than just a one-trick pony. The M.2 slot supports a number of other devices. For instance, you could use wireless cards with M.2 A+E converter boards to set each wireless card to a specific wireless channel for spectrum monitoring. There’s even an AI angle here. A company called Axelera makes an M.2 AI Edge accelerator module that could be used by the Apex card at some point as well.
Or the Coral TPU could also fit on this.
*Slaps top of card*
This bad boy can fit so much AI processing
I love how we can just mess with funny brain replicas for shits and giggles
@@puerlatinophilus3037 we have one of the other 2* and are looking at those accelerators in it right now.
@@JordansTechJunk Meanwhile, here I was just thinking of using 21 Asus pce-AC88 wireless cards to pci m.2 adapters and running all 21 wireless cards in parallel for a grand total of a theoretical internet speeds of 44.1 Gb/s upload and download. Do you think that's too much RF radiation? 😁
@@luminatrixfanfiction I who know nothing really, could speculate about interference making it not work.
Not good enough anyway.
But then again, with the right hardware and software that may not be a thing to worry about, I am simply speculating.
its crazy to think just a few years ago Sabrent was the "try it if you want" brand I bought a 240gb drive probably 5 years ago and I was skeptical but damn they are on my list of good brands for M.2s
What are your thoughts on TeamGroup?
@Valor_X teamgroup is not bad. I think they are similar to silicon power, but still a little early i think.
May be better.
my sabrent drives died within a year. WD is better.
@@passmelers samsung is the best
@@isaacbejjani5116 nah. hynix, micron and intel are the best at least from my opinion. idk about their consumer grade SSDs but some of Samsung's enterprise SSDs are just.. problematic. early PM863, PM883, PM1733 seems having issues with its firmware
Imagine watching this video 10-20 years from now, and you can fit that amount of data in your pocket. Tech is so wild, I'll never get enough.
13:49 there is something so hilariously silly and simple about Linus using cardboard in a cooling solution.
Man, I remember back in the day running two of the smaller Raptor drives (32gb?) in Raid 0 and just loving life. I may not have had the best graphics card or processor, but I loaded into BF2 faster than any of my friends for a while. What a time to be alive.
Always first to GET TO THE CHOPPA, remember.
I was so proud of my raptor. Better days.
My raptor 600 is still going, but I use it as a temp/cache drive because I'm afraid it might reach EOL at any time
My colleague had one and I remember my ignorant brain saying 14,7GB 15k SCSI drives in RAID are better. Never mind the 2K setup costs.
Imagine using this as vram
Sabrent just asking for a shoutout and casually sending that many 8TB SSD over is a big W move! They truly support Linus craziness and we all benefit from it! :D
I could see this working well for engineering simulations (FEA, CFD) that are effectively limited by the amount of ram that you have. Before something like this, maxing out the ram and letting the solver use the hard drive would increase solve time by something like 10x.
But you would most likely use DCPMM for that because it has much lower latency. You could probably use this for some kind of custom swap in some cases, but at some point there's diminishing returns. The advantage of this card is that it's compact. There are systems that have lots and lots of u.2 connections and can support far greater capacity and speed.
The newest simulation machines where I work are for chip cooler simulations. They use CXL RAM as a layer between the SSDs and RAM. We also have sapphire Rapids HBM in dual-socket for it as well.
This would still be slower, 25Gb/s on ram is per channel, if you are running simulations that intense I would think you would have a very dece amount of ram and be running it in 8 channel. This is cheaper for sure but I don’t think the OS is designed to use swap like that, I think it still goes through ram first and constantly dumps back and forth from ram to disk increasing your CPU usage drastically. This would however be very useful in AI training, the datasets are massive and gpus nowadays especially if you have like 4 A6000s could eat up quite a few IOPs. I would like to see a water cooled version of this, maybe they will partner up with an ssd manufacturer and offer that later as a factory assembled package.
@@radugrigoras: There's nothing particularly special about this. It sacrifices speed for compactness is all. So it's great for a system with few PCIe lanes or systems that need to be small. There are very few situations where this is something you'd even care to have.
I haven’t done this kind of work since 2013 (finite difference simulations on a 192gb, dual 8 core Xeon machine) so I’m guessing things have changed and gotten better in that time span.
Just imagine how insane the amount of storage you'd have with multiples of these! Its crazy that with only a couple of these and you're already at a petabyte, and in such a tiny amount of space. Crazy.
I always really love Alex's solutions, they're why I always get excited for a new video with him in it.
14:07 (insert meme) when your ssd is faster than your ram memory...
Those chips are used quite a bit in NAS and storage backups in my enterprise environment. Dell Avamar and Isilon units will connect to switch fabrics with those chips. Very fast stuff!
I've been looking for some solutions for backing up my personal files and media collection. Can you recommend any resources that will help me do that? I have older stuff on old nearly-dead HDDs that have been removed from their systems, but that's clearly not a comfortable, current, or reliable storage method
@@Thalanox why not just build a NAS using TrueNAS or just ZFS as a base. It's very well documented, easy and can be run on a old system. Just grab a bunch of used certified drives, and a old PC and swap the power supply for something decent and done pretty much. Set up a RAID array and put the files on it and call it a day. Drive fails? you're still covered
@@Thalanox electrons escape gates, so... keep using you backups on SSDs
@@Thalanox If you want something enterprise grade get a dell r730. It's older, but will do pcie bifurcation and supports nvme
If you asked me 15 years ago where storage speeds could be I wouldn't have guessed this fast... It's hard for me to be super excited because I feel like the consumer application isn't really there... but the use case in cloud computing will be huge and we'll see it's affects in our services. It's interesting how obfuscated this technology is for general consumers even though we'll all see benefits.
I remember being impressed when early Sata 2 SSD's were breaking 200MB/sec read speeds and people would Raid 0 them for 400+MB/sec
I'm still super impressed at my Gen 3 NVME 3,500MB/sec speeds... how far tech has come
@@Argedis yea, Gen 4 drives are so fast nowadays that there's really no reason to raid 0 for most people
Server motherboards already have a fak ton of ssd slots built-in. I built 2 servers like that 7 years ago with 10 Intel sata ssd (consumer grade) for my job. I don't remember the exact performance (probably 10 x sata speed) but it was insane for the relatively low cost. We put 40GBps network cards and a 40GBps switch between them and built one storage array spanning the two servers. It was a proof of concept for a cheap high availability hyper-v cluster. Good times.
Yeah we need this in consumer hands. FFS I still have 2 mechanical drives in my computer and like 5+ HDD for storage because they have big capacities and cheap... even if way slower.
18:30 The Anthony phone-a-friend lifeline is my favorite thing ever
i miss him
12:25 55°C is apparently the sweet spot for nand flash, it doesn't like being colder or hotter
Colder = increased data retention, write wear
Hotter = increased hardware degradation
Anywhere I can read up on that?
@@pandozmc I think I first read this on a EKWB site but now I can't find it.
After a bit of googling I found this anandtech article from 2015 back when SSD data retention periods was becoming a concern.
I should note that my memory isn't quite accurate, so if you do look this up feel free to ignore what I said in previous comment.
here's what anandtech says :
_The conductivity of a semiconductor scales with temperature, which is bad news for NAND because when it's unpowered the electrons are not supposed to move as that would change the charge of the cell. In other words, as the temperature increases, the electrons escape the floating gate faster that ultimately changes the voltage state of the cell and renders data unreadable (i.e. the drive no longer retains data)._
_For active use the temperature has the opposite effect. Because higher temperature makes the silicon more conductive, the flow of current is higher during program/erase operation and causes less stress on the tunnel oxide, improving the endurance of the cell because endurance is practically limited by tunnel oxide's ability to hold the electrons inside the floating gate._
That is insane. As a database engineer I want one of these things.
168 terabytes. My man is trying download the entire steam library
Oh Linus not the TOES! 2:25
😭🙏
I could probaly see a card like that for bio-informatics use case. Wholegenome sequencing datasets can be multiple TB of data, a card like that would 100% speed up analysis of such masive amount of data
This would be great for a Virtual Computer center - I’m biased towards education - having Lab VMs available on demand across a LAN would be fantastic.
I’m only part way through watching as I post this, but I would be very interested in IOPS on a new heavy access load.
Aaaaand… you didn’t let me down! You did the IOPS. Writes in a RAID will always cause a performance hit (especially because you won’t be able to use single parity bit and will incur extra cost on >8TB - which this definitely is!), but if I was using this it would be for VMs and Data with a heavier READ profile and it would be pretty cool.
Clicked on this video so fast that even the SSD couldn't keep up 😅
First
The Ads made me slow.
Faster than a Honey Badger I gather
Linus: I will suck on your toes for this many drives.
Sabrent: Send it!
😂
Yvonne would get jealous no way jose
16:15 if i remember correctly, when task manager says a disk is at 100%, that just means it was being actively written to 100% of the time since the last tick that it measured.
Now if only Optane was still around! :D
The AI thing does make sense especially when combined with DirectStorage-type tech.
it doesn't, because most datasets are precomputed and cached
@@MrNoipe How does being precomputed and cached negate Direct Storage benefits?
@''/ad cached.... you mean stored on a massive bank of drives? Beyond that, there's still computing in order to mesh data together to create something new... this is where quick access storage helps. Being able to access multiple pieces of data quickly means you're only limited by computing resources, not storage resources which tends to be an issue.
@@MrNoipe it makes sense when training the ai
Over 10 years ago, before discovering Netflix back when it was good, I would be asked by my family to, find, and host entire seasons of their favorite shows as well as some of their favorite movies. I used to spend ages burning custom DVDs, but we were in the future with a 1080 HDTV and our Xbox 360 could connect to my PC using windows media center turning it into an HD streaming home server on the side. This meant a lot of time downloading overnight, and my 1TB-ish hard drive with more storage than I thought I would ever need was starting to actually visibly fill up, as I also had many multi-gig games as well as rips of the disc images for easily swapping into an emulated drive without swapping/scratching physical media, meaning my physical games needed space for the install and ISO, as well as gigabytes of music in my "My Little Pony Fan Music/Remixes" folder alone. Basically it was a well-loved PC that I volunteered my extra space to turn into a windows media center HD video streaming server for my family on the side.
Now Netflix doesn't have all of the good shows anymore, and it is so expensive and inconvenient to be able to stream all of the shows you like as you juggle tons of account credentials, and it is getting harder if not impossible to have an ad-free experience no matter how much money you throw at them.
So what if I wanted the entire library of every good show and movie stored locally in HD, or even 4K as we just upgraded a few months back? I don't want TV myself but I doubt the streaming services or our ISP are letting us stream much if any 4K content despite us having the fastest internet package in our area and I purchased a 4k streaming capable router last time we needed one so we could be 4K ready when the time came.
Well, acquiring thousands of hours of TV and movies in the best quality available is its own beast, but should I ever win the lottery at least I know there's a product that can theoretically work with a home machine, with a modular nature so extra storage can be added as needed instead of having to buy a larger drive and copy the files from the old one, and also doesn't require that you buy $30k+ worth of SSDs up front and just add more as needed. Basically, if you're willing to spend your time ripping a massive collection of 4K blu-rays and whatever else it takes to get your hands on every episode of every show everyone in your household loves, and the movies too, in the highest quality available, you can make your own media server with no ads or switching between 3 different services to watch every Star Trek series and movie. No "we can't watch this because my parents in the next state are on the account right now"
It would be a huge money and time investment to get the card, drives, and all of the media, but it is a lot more technically feasible now to have your own HD/4K media center just by adding an extra card to a regular PC.
Thanks to you Linus, I'm building my first PC this week!
Would be nice to see a scaled down version of this, for those who want this kind of thing but without quite so much cost or overkill.
That commercial chiller that you have for your lab should be used with this on a milled cooling block just for this application. See what you could do as far demos go. You might be able to see those numbers climb higher.
honestly if this gets affordable for the average consumer, this could be an absolute game changer for any system and game developers
just to be clear, bios raid limitation does not matter for anyone in professional space.
Raid is basically broken for anything serious - it can detect if drive is dead and then migrate to spare one (in case of something like raid5)
but the issue is when drive is not yet dead, it's failing - in that case it might corrupt your data.
that's why for any serious data storage you want to use something like ZFS so it's able to check checksums of your data and detect that something was corrupted on drive and fix it on the fly
what would be also useful use case that such amount of iops? ZFS deduplication, that required to add another layer before accessing data -> you want to get data block X, then you need to read from deduplication map, and get read addres for data Y and then access that data. This basically halves amount of IOPS you can do (assuming no cache hits in memory) that's why it's not used that often.
In this configuration you could get a much more effective storage when expecting duplicated data
I haven't built a computer in 20 years .. and I feel like I know absolutely nothing now.
Well, Linus guys fall for this shit some years ago... search on channel "Our data is GONE... Again - Petabyte Project Recovery"
They Rely solely on ZFS to handle disks and they lose.
Uhhh... I'm sorry did I just see a Leather LTT backpack?? @9:07
I love how they included that clip of shocking hardware to test if it can be killed while Linus was holding it with his thumb on the slot contacts. You just know someone had commented how he was going to zap the board.
Edit: oh man, this just reminded me of something I had back in the IBM XT days. Back then I had a 20 mb hard drive built into an ISA card, aka a Hard Card.
Love it when people go "hey! that's a bottleneck" to setups they will never have and could never afford
This was a year ago and Linus was right. These cards did catch on a little, and they work. I put another brands PCIe 4.0 card in an older computer, stuck 2 M.2 drives on it, and it works great. I cannot tell they're on a PCIe board and not on the mobo. Worked great!
Linus: "Under normal circumstances, you wouldn't do something dumb like configure 21 drives in a RAID0"
Also Linus: Heh heh, you wanna do a RAID0?
This product is bananas, I'm gonna have to watch this vid a few times more.
Me: *buys one SSD*
Linus: *buys a pcie SSD card the size of a graphics card*
cost: close to 30 nvidia rtx 4090
1:34 im not expert, but these SSD bend is not looks healthy
Having two of those cards, each RAID-0 running as a RAID-1 would be pretty much perfect setup for a big database server. Combine those cards with say 1 TB of RAM and you can execute huge queries very rapidly against a 100 TB database.
You might need a lot of CPUs or GPUs as well, to actually move all of that data around.
Or perhaps those newfangled Data Processing Units, whatever those are.
fart hehe
That's called RAID-10 😂
@@exorr81 That's true. I explained that as a combination of RAID-0 and RAID-1 for two reasons:
(1) RAID-0 and RAID-1 are easier to understand for most people than yet another RAID mode.
(2) If you configure such setup as RAID-10 directly, changes are higher that you mess in a way that causes the whole RAID to be offline if one of those cards fails. (That is, you configure stripes and mirrors incorrectly over all 42 devices.)
I was going to get a 2nd Synology and put 8TB SSDs into it. But it looks like m.2 storage is being taken seriously as an option for RAID. I'd much rather go with NVMe if it can be a comparable price.
You guys are doing the kind of thing that I always wanted, but never had the money
Linus's face when Alex mentioned chia mining
😂😂😂
The primary use for these PCIe switches are actually in servers - when there's lots of SSDs in a server backplane they're sometimes connected directly, but often there's one or more big PCI-e switches driving the slots. And each switch will have a 16x or 32x (yes, this is a thing even if the standard only lists up to 16x) PCIe 4.0 or 5.0 uplink. This is the real reason why PCIe link speeds are again trending upwards quickly after a long hiatus (between 3.0 and 4.0), there's finally a user (servers) that are willing to pay for development (there's not enough entuast to pay for it by orders of magnitude).
And I assume the reason the card doesn't have any fans is because they expect it to end up in servers where there's a large amount of airflow provided by the server itself (sometimes capable of cooling 400W GPU or AI accelerator with just the servers airflow, no fan on the card.
It's also unfortunately why PCIe switches disappeared from consumer motherboards (many early SLI motherboards had them), the PCIe switch vendors jacked up the prices by 10x because most were sold to server vendors and they were willing to pay.
"Better" (IE expensive) server backplane often accept PCIe (U.2 or U.3, both 4x PCIe), SAS (12Gbps) or SATA (6Gbps) in all slots by routing each slot to either to the PCIe switch(es) or the SAS switch(es). Yes, SAS switches offers all the same features (fabric, multiple servers, dynamic load allocation to the "uplink" and so on) to both SAS and SATA disks, they've been in server backplanes for a long time (long before PCIe SSDs were a thing, never mind M.2). It does makes me wonder if some of the PCIe switch chips also does SAS, that would reduce chipcount and make routing far simpler which means they can ask the server builders for more money...
As far as I can tell PCIe has been getting faster at a pretty consistent rate for many years, it's just Intel was a laggard in going to PCIe 4 so gen 3 was around longer and by the time they were on gen 4 we were already close to gen 5. If you look at it in terms of AMD instead gen 4 was around for plenty of time.
@@bosstowndynamics5488 No, 4.0 was seriously late. The official introduction years is 2003, 2007, 2010, 2017, 2019, 2022, (2025 planned) for 1.0 to 7.0 (check official graphs or the Wikipedia article). Note how they all are around 3 year, except one that took 7 year - that's the 3.0 to 4.0 transition and it's a big outlier.
I had thought of this when SSDs first came out. I thought to myself, what happens if storage drives become so fast that they make RAM obsolete. Being able to just read and write information straight off a storage drive has to be faster than going through a RAM middleman.
Well it could be done, however you would need to do "SSDRAM" changes fast as write is the death of the ssd. Depending on workload etc i think in a year you would need to swap it.
The reason why we don't have SSDs replace RAM is that it's a little more than just raw read/write speed. Those reasons are latency and longevity, both of which are affected by how data is written to SSDs.
With that said, I'm sure there are highly specialized computers out there that don't have RAM and just use HDDs/SSDs, but they certainly aren't common to consumers.
That's not how it works. The closer you are to the cpu die the better. That's why cpu cache exists, which is essentially "ram" that is located as close to the die as possible. For example, the time to reference L1 cache is 0.5ns. The time it takes to reference RAM is 100ns. That is an absolutely massive difference. You can't only look at raw speeds, you need to factor latency. That's why lower memory latency improves memory performance even if raw speeds remain the same. Often a decrease in latency is much more significant than raw speed.
2:25 "I would suck on your toes for this many drives" don't drop yourself like that Linus 🤣🤣
PS: these jokes come exclusively from love and admiration -please don't drop me from your channel- .
Finally a solution to the 2GB DDR2 ram dilema. (you can only have max 4GB on a 2 slot moteboard)
DDR2 is still much faster because it has much lower latency.
@@marcogenovesi8570 Wait, why is anyone using DDR2?
Does DDR2 have any advantage over DDR4/DDR5 that I am missing?
@@hubertnnn: I have some old systems running with DDR2. Work just fine. :)
@@marcogenovesi8570 Ok, I thought by "more recent" you meant, that they made a new DDR2 last year
12:00 I LOVE THAT BOARD
A PCI 5 Version would be very nice
@SABRENT I bought one of your SSDs recently. I am usually brand focused and I did not know your brand before I saw several LTT videos with them. I just wanted to let you know, that your sponsoring here actually works :)
21 SSD's?
21, can you do sum for me
Can you hit a lil rich flex for me
Probably one of the more interesting videos as of late. Clearly I need to do more research on SSD tech, PCIe, and RAID. This kind of went over my head at points.
I'd be hyped if there was a budget version of this. You can get a budget 1tb ssd for ~40€ currently here in Germany, the only issue is I only got so many m.2 slots. This thing would solve that problem.
try this: B07T3RMFFT -- should do the trick
Turbo budget version, it's just a pcie to four m2 slots card. No logic, just wiring
Yep, just looked, they're already on eBay for about £25
About £50 for a 1tb crucial nvme M2, are those considered budget drives or middling?
Those ebay cards need the mobo to support bifurcation, so not the same thing as this Apex card that foes not.
First thing I thought of was for video editing. A feature film at 8k raw can take up massive drive space and if you collaborate on a project this would be ideal. At about 8 tb's per hour of raw footage this would be ideal.
Some 30k USD in movie budget is minimal investment. And it still exist when film is done.
On the Topic of Hot-Swap M.2: This feature must be supported by the Controller AND the SSD - and the Crucial P3 in the test does not support that.
Through my personal testing, I've found not all hot swap feature on M.2 is the same - its basically a hit-or miss (for me at least)
I thought by the nature of its design all PCIe device chips need to support hotswap. The hard part is power switching and mechanical considerations which make m.2 unsuitable.
Well I literally just attached 8 NVMe 2TB drives to my ITX server. I bifurcated the x16 gen3 slot into two x8 slots using bifurcation. In each slot I placed a PEX 8747 board carrying 4 NVMe cards each.
Prices of 2TB drives really went down recently. They finally are under $100.
Where can you get a 2tb nvme for under $100, what brand if you don't mind me asking?
@@Neopopulist my comment got deleted. In summary, Silicon Power.
It sound like it dynamically routes pci to where it is needed, It is super cool that it is possible to run any 4 drives at full pci speed or 8 cheaper slower drives at their slower speeds at the same time.
Onboard raid controller could be perfect, so that the pci bandwidth only needs to transfer to the drive controller and the drive controller does the mirror instead of using pci bandwidth to mirror.
2:17 Undisclosed sponsorship.
5:49 Me choosing the video at night
Great video, would love to see more affordable non-Bifurcation options like the highpoint for comparison.
Also thanks for the linux testing as well! Some of us don't use windows 🤣
you know it's fast when Blu-Rays/second is a legit unit of measurement for the bandwidth
I want to see it with 21 Optane modules. Not for the balls to the walls throughput but the IOPS, low cue depth power and latency.
I think 4 M.2 drives in a single 16x slot is the sweet spot.
Maybe 5 drives with a raid 5 controller.
20:48 As a crypto miner and a Chia Farmer he read my mine, I swear the whole time I was calculating how much plots i could create at once
I think this could also be used for high output high input AI data processing.
Yup.
Especially with the new GPU direct read/write stuff, I forget what it's called.
Yes, in many ML applications :)
me in the back when my friend is talking too much 8:18
If I were to have like a private Emby media server with a bunch of these ssd cards I could fit thousands of full shows.
And I mean all episodes of all those shows, from start to finish, and still have storage for all of my games. Sheesh that's a lot!
I was thinking the exact same thing.
(Puts eyepatch in pocket)
I bought a 1TB Sabrent for my Steam Deck, and it has been great. I'm curiously about why the help the brand has exploded in what seems like less than a month.
If the card ends up maintaining its functionality for the advertised lifespan, I will be extremely happy. I just get the "too good to be true" vibe from their products.
Still running my Rocket 3 years later, they make a nice drive!
@@OdisHarkins when it comes to data storage very rarely gone with any company that wasn't a "household" in the US.
I am still very impressed by the performance I've been getting out of the card so far.
It's not like their drives are the cheapest out there or anything. So no reason they would be "to good to be true".
@@xamindar Plus they're pretty much just a Phison controller paired with name brand NAND. Hard to make a bad drive when all the hard parts are outsourced to experienced third parties.
It's faster than _a_ stick of 3200MT RAM. Not 2 sticks as you would use in dual channel.
As for SLC cache, the Rocket 4 8TB has around 880GB of it per drive, so yeah, if you have 4 drives it'll slow down after you've written 3.5TB, but that's an awful lot of data to need to write at ~24GB/s.
That thing still is insanely fast though.
13:30. There are brackets you can buy that let you attach pc fans to them to blow on cards in pci slot. They would actuallyvwork really well for this particular set up
I would LOVE to have one of those in my dev computer. Imagine the gain! Developing 3D games / VR / AR applications can take up a LOT of storage space. And compiling / rendering needs to read / write as much data as the CPU / GPU can handle. Add project management / version control, and both capacity and speed of a card like this become VERY interesting indeed!
Of course I can't afford even a basic dev system (not even 4.5K euro), let alone add this card XD
I know! It makes drool. I caught myself considering selling my car, 😂
Guess that beats what is currently on the market even 6 months later. This product is still on pre order.
sabrent is absolutely insane for sending this madman these ssd for free
couldn´t you use an integrated GPU and then use the leftover slot to use 42 ssd´s total?
$62k ☠️