This SSD is Faster Than Your RAM - Apex Storage X21

  • Published: Dec 2, 2024

Comments • 3.3K

  • @noahhallman9174
    @noahhallman9174 Год назад +2616

    One of my favorite things in LTT videos is Alex being worried about jank things that other people do even though he is the jank master himself. He will be worried about other people doing things and do something 10 times worse a minute later. And I am here for it.

    • @simenk.r.8237
      @simenk.r.8237 Год назад +130

      That's because he's an engineer. That makes him able to do jank safely; I've never seen him fail EVER haha

    • @theisaacpigg27_32
      @theisaacpigg27_32 Год назад +47

      @@simenk.r.8237 The fail wasn't in the video itself, but in his Intel Extreme Tech Upgrade he mentions how he fried the DIY CPU.

    • @littlejackalo5326
      @littlejackalo5326 Год назад +40

      ​@@simenk.r.8237 he's not an engineer. He took a few undergrad engineering classes. FAR from any engineering degree, and miles from being an engineer.

    • @SDMasterYoda
      @SDMasterYoda Год назад +16

      Alex's janky ideas are my favorite LTT videos.

    • @StormHawksHD
      @StormHawksHD Год назад +4

      do as I say, not as I do.

  • @raistlinmajere1000
    @raistlinmajere1000 Год назад +6801

    Linus holding a 31.000k SSD? This is going to be a wild ride.

    • @krzychhoo
      @krzychhoo Год назад +230

      At least it's not as likely to break as a hard drive when he drops it!

    • @nayan.punekar
      @nayan.punekar Год назад +420

      Just say 31,0000 or 31k bruh.

    • @rapidfuge
      @rapidfuge Год назад +71

      Saying it with “31 THOUSAND” has a more ✨Dramatic Flare ✨

    • @rhandycs
      @rhandycs Год назад +184

      Casual 31m dollar SSD?

    • @v1Broadcaster
      @v1Broadcaster Год назад +52

      @@rhandycs theyre foreigners and use their commas and decimals backwards and not the way of us in the land of the free

  • @andywallis6154
    @andywallis6154 Год назад +792

    I absolutely love this 'we shouldn't be doing this' dynamic that these two have.

    • @loicvanderwielen
      @loicvanderwielen Год назад +15

      I love that Alex somehow managed to bring dodgy cooling to an SSD product review

    • @veduci22
      @veduci22 Год назад +4

      It's like watching a shopping channel... "We shouldn't use the product like this!" "Oh wow, it really works!"

    • @fynkozari9271
      @fynkozari9271 Год назад

      I wouldn't buy a Sabrent SSD, unreliable storage.

    • @KaizokuSencho
      @KaizokuSencho Год назад

      Pinky and the Brain!

  • @shanent5793
    @shanent5793 Год назад +702

    "Hot plug" refers only to the switch chip itself, m.2 doesn't allow for it. The mechanical interface still has to be designed to ground the SSD before applying power and limit the inrush current.

    • @metaforcesaber
      @metaforcesaber Год назад +115

      You sound like you know what you are talking about. I believe you.

    • @darkdragonx650
      @darkdragonx650 Год назад +45

      @@metaforcesaber About what most people's thought process is watching these vids lmfao

    • @n.shiina8798
      @n.shiina8798 Год назад +10

      I remember Wendell from L1T talking about this in his NVMe hot-swap bays video. The whole design doesn't seem to have hot swapping in mind either.

    • @tzshzr
      @tzshzr Год назад +5

      Apply for job in LTT

    • @klaik30
      @klaik30 Год назад

      I think Alex did say something like this at 5:36. He didn't fully explain why it doesn't work though 😶

  • @david_sanchez
    @david_sanchez Год назад +964

    I remember seeing something similar to this way back in the 80's. They were essentially the first SSDs. It was a board that you'd install a bunch of RAM on. Back then RAM didn't come on DIMMs. It was a bunch of socketed IC chips that looked like EPROM. Then the board would be installed in an ISA slot (it was probably EISA, don't remember). There were also no drivers. The board would present itself to the BIOS as a native storage device, kinda like how ST-506 and ATA did. The BIOS would facilitate communication between the OS and the "drive". It was a lot like MFM and RLL drives that had their own expansion cards. This was all pre-Windows and GUI OSes.

    • @edgarfriendly666
      @edgarfriendly666 Год назад +43

      There was a similar card in the '00s that took DDR DIMMs.

    • @josephkarl2061
      @josephkarl2061 Год назад +15

      ​@Edgar Friendly I've just ordered one of those to finish off my insane Windows XP build. I don't know how practical it's going to be, but I just had to get one 😂

    • @Cyhawkx
      @Cyhawkx Год назад +19

      @@josephkarl2061 I used a 4GB Ram disk way back when. The only impractical thing was copying the program over to the ram disk before running and then remembering to copy back when done. I had it down to a script.
      These days I'd consider some sort of filesync program (syncthing maybe?) that handles your files to copy back/forth in more real time so you dont even have to remember the second step.

    • @josephkarl2061
      @josephkarl2061 Год назад +3

      @Z C The plan for the card is to have XP installed on it so Windows will run faster. The Ramdisk is on a PCI card, and even though I've been into computers for a long time now, I never knew the PCI slot will still get power even after shut-down. It does have a backup battery, but we don't get too many power cuts where I am, so I should be good. I'm looking forward to seeing what a difference it makes.

    • @Argedis
      @Argedis Год назад +8

      @@Cyhawkx I almost bought one of those. I remember it used a battery to keep the storage alive when shut down.
      It could basically fit Windows XP and a few programs lol
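
The copy-in / work / copy-back script mentioned in this thread can be sketched in a few lines of Python. This is a hypothetical illustration (the `run_on_scratch` helper is made up for the example), with temporary directories standing in for a real RAM disk such as /dev/shm:

```python
import shutil
import tempfile
from pathlib import Path

def run_on_scratch(src: Path, scratch_root: Path, work) -> None:
    """Copy src to fast scratch storage, run work() there, then sync results back."""
    scratch = scratch_root / src.name
    shutil.copytree(src, scratch)      # the manual "copy over" step
    try:
        work(scratch)                  # run against the fast copy
    finally:
        # the "remember to copy back when done" step, automated
        shutil.copytree(scratch, src, dirs_exist_ok=True)
        shutil.rmtree(scratch)

# Demo with temp dirs standing in for the project folder and the RAM disk.
with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    project = Path(a) / "project"
    project.mkdir()
    (project / "data.txt").write_text("input")
    run_on_scratch(project, Path(b), lambda d: (d / "out.txt").write_text("done"))
```

A file-sync daemon like Syncthing, as suggested above, would replace the final copy-back with continuous two-way synchronization.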

  • @smalltime0
    @smalltime0 Год назад +68

    I can see the utility of this.
    When I was still at uni (back when the fastest way to transfer data was TAs with USB keys), I worked with a professor on a project where he was using datasets that were about 32GB each, and he was having to go through about 100 of them to map them out. The software, which he made, required the data to be loaded into memory, chucked out, and recalled later. So this was on the order of 1 test per night on an (at the time) top-of-the-range dual Xeon system. And I successfully got it up to 2 tests per night, hold your applause.
    Something like this would have been a godsend. I mean, I'd have to do more work writing code, so I am glad it wasn't a thing.

  • @ragtop63
    @ragtop63 Год назад +420

    Pros of BIOS RAID:
    - More "natural" path to installing your OS on a RAID volume.
    - Presents the volume to the OS as a single drive (mostly only significant because of the previous point).
    Cons of BIOS RAID:
    - In almost every case, you need drivers to use the volume in your OS. This is a common pain point during OS install.
    - For consumer boards, it's still just a software RAID (see impact in the next 2 points).
    - There are almost no performance benefits in regards to CPU usage (tested this many times).
    - Data throughput isn't any faster than using an OS software RAID (tested this many times).
    - If you ever move your drive(s) to a different system (especially a different brand) that doesn't use the same BIOS RAID, you can't access the data. OS level RAIDs will always be recognized by the OS (unless you toasted your drives somehow).
    Do not confuse this for a list comparing hardware vs software RAID because consumer grade BIOS RAID is not hardware RAID.

    • @timramich
      @timramich Год назад +23

      Hardware RAID is dead anyway...

    • @williamschnl
      @williamschnl Год назад +6

      yeah, i'd rather use zfs raidz1. easy to use, easy to replace broken drive.

    • @fitybux4664
      @fitybux4664 Год назад +13

      That last one can really screw you. "If you ever upgrade your hardware..." 😆 Almost all hardware like this will always get upgraded or break in some way. Doing BIOS RAID is very not smart. Also, if the RAID algorithm is ever slightly bug fixed or improved, you won't get those improvements, because BIOS upgrades stop being released after a few years.

    • @n.shiina8798
      @n.shiina8798 Год назад +15

      @@timramich for a general consumer, yes it's dead. for enterprise use, it could be beneficial since hardware RAID with write cache usually have battery backup

    • @astronemir
      @astronemir Год назад

      We run an enterprise database without hardware RAID. Unless 50% of the drives out of ~100 across all nodes go kaput immediately, we wouldn't even lose data, and we can always bring it up from backup and re-ingest the missing data during the downtime to get back up and running.

  • @philb5593
    @philb5593 Год назад +1241

    I love videos with Alex and Linus. Linus loves to do things the janky way, and Alex has an engineering background, so Linus hopes that Alex will do things the correct way. But then Alex does things the janky way and Linus isn't happy, but then things work out, and he's happy again.

    • @sellinganja
      @sellinganja Год назад +43

      Fake video Linus didn't drop it once

    • @revdarian
      @revdarian Год назад +13

      He dropped 21 ssd's on the desk at the start! Intentionally but it counts, right?

    • @timurtchanychev2174
      @timurtchanychev2174 Год назад

      Lllll

    • @cyrilio
      @cyrilio Год назад +5

      This comment was a rollercoaster ride. Completely correct though.

    • @philb5593
      @philb5593 Год назад

      @@revdarian Probably empty boxes

  • @timothyreeve4630
    @timothyreeve4630 Год назад +89

    For the VFX editor (this may be a simplistic view):
    in things like Baselight X, where uncompressed video with native 8K-raster EXRs is used outside of a proxy workflow for finishing, the sheer size of uncompressed EXRs combined with today's high-frame-rate requests means your bottleneck is storage speed.
    There are bespoke Linux appliances used for review-and-feedback workflows in VFX, where high-speed RAID cards are used for caching these frames.
    This would be ideal for that purpose.
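
The storage-speed bottleneck described above is easy to sanity-check with back-of-envelope numbers. All figures in this sketch (resolution, channel count, bit depth, frame rate) are illustrative assumptions, and EXR compression is ignored:

```python
# Rough bandwidth needed to play back uncompressed 8K float EXR frames.
# Every number here is an assumption for illustration, not a measured figure.
width, height = 7680, 4320     # an "8K" raster
channels = 4                   # RGBA
bytes_per_sample = 4           # 32-bit float
fps = 48                       # a high-frame-rate request

frame_bytes = width * height * channels * bytes_per_sample
gb_per_second = frame_bytes * fps / 1e9
print(f"{frame_bytes / 1e6:.0f} MB per frame -> {gb_per_second:.1f} GB/s sustained")
```

At those assumed settings this works out to roughly 25 GB/s sustained, beyond any single NVMe drive, which is one reason RAID caching shows up in such appliances.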

  • @mattp12
    @mattp12 Год назад +434

    I’m not so sure. A ram’s top speed is 20 mph. How could an SSD possibly outclass an animal of such swiftness?

    • @naoltitude9516
      @naoltitude9516 Год назад +17

      LOL

    • @giancarloraphaeldeguzman8613
      @giancarloraphaeldeguzman8613 Год назад +66

      A RAM has 395hp and 410lb-ft of torque.

    • @danielshimko7617
      @danielshimko7617 Год назад

      ​@@naoltitude9516😊😊

    • @TH3C001
      @TH3C001 Год назад +28

      These ram jokes are great, before I could Dodge the first one I saw a second one lol.

    • @ulrichkalber9039
      @ulrichkalber9039 Год назад +3

      Well... drop the SSD from 60000 feet, it will certainly go supersonic before it reaches thinner air.

  • @DisloyalGaming
    @DisloyalGaming Год назад +275

    Sabrent has really been going at it lately: first with one of the best options for the Steam Deck, now seeking out large silly projects to sponsor just to show how much progress they as a company have made. It's just insane. I swear I hadn't even heard of them till last year.

    • @guspaz
      @guspaz Год назад +51

      From what I can tell (and some of this is speculation), they started out just selling cheap generic adapters and USB hub type things, not being a player in the storage market at all, and then started selling some minimal-effort generic reference-design SSDs... Only, they chose the controller/reference design well, so they were pretty decent SSDs at good prices, which managed to catapult them into the limelight, giving them the revenue to put some actual in-house design effort into the things, and now they're a major player.

    • @arkayngaming727
      @arkayngaming727 Год назад +15

      @@guspaz Your analysis is spot on: Sabrent entered the SSD market in 2018 and are now a reputable brand.

    • @arkayngaming727
      @arkayngaming727 Год назад +4

      Sabrent has been around since 1998 so they're definitely not a new company

    • @WarPigstheHun
      @WarPigstheHun Год назад

      They are the Samsung of 2017

    • @WarPigstheHun
      @WarPigstheHun Год назад +2

      ​@@arkayngaming727 yeah I bought my first Sabrent gen 3 in 2018 or 2020(?)

  • @rawexploiterp6951
    @rawexploiterp6951 Год назад +8

    my dad says: "if more data is stored in 1 point/place, then its scarier to lose that point."

    • @AnotherAustin-z7b
      @AnotherAustin-z7b 20 дней назад +1

      Imagine being a human that can only exist in one place and time and can't be replaced. Very scary!

  • @tassadarforaiur
    @tassadarforaiur Год назад +412

    Would love to see this thing loaded purely with Intel Optane drives, i.e. 118GB P1600X NGFF 2280 drives. It would only be ~2TB of data, but imagine the IOPS.

    • @Yamagatabr
      @Yamagatabr Год назад +10

      Wow, isn't Optane dead??? I thought that modern NVMes had surpassed it.

    • @NadeemAhmed-nv2br
      @NadeemAhmed-nv2br Год назад +58

      @@Yamagatabr they haven't

    • @bosermann4963
      @bosermann4963 Год назад +2

      @@NadeemAhmed-nv2br why did the project get killed off then?

    • @benjaminmudd2071
      @benjaminmudd2071 Год назад +73

      @@bosermann4963 Nobody bought it. Most consumers did not and still do not need Optane and it is a lot more expensive than normal m.2.

    • @visveshnarayan7964
      @visveshnarayan7964 Год назад +28

      They should get a bunch of P5800Xs and do it with them. Those drives are fast as hell.

  • @asdf232003
    @asdf232003 Год назад +180

    Linus has been doing this for more than a decade; you would think his enthusiasm would be gone by now, but no, it has increased by a ton. This is why LTT is so cool!

    • @Sarcasshole
      @Sarcasshole Год назад +13

      To be fair this is a rapidly changing industry. It's kinda hard to get bored of it

    • @AvaaSlays_Swiftie
      @AvaaSlays_Swiftie Год назад +1

      @@Sarcasshole Yep, I was gonna say the same thing. If you're in a field that requires messing with lots of different setups, you'll never be bored. It's quite amazing how much stuff changes per year.

    • @fynkozari9271
      @fynkozari9271 Год назад

      Why would u lose interest about something u are passionate about? I've been playing pc since the 90s, still gaming today, should I stop? I like women since I was a child, so I should change to men now as an adult? What kind of logic is that? I was passionate in science as a child, I should stop liking science now? By that logic a doctor loving their job should change their career to become a farmer, because they lost enthusiasm.

    • @kanokwanphuawongphat1021
      @kanokwanphuawongphat1021 Год назад

      0:47

  • @HedgehogY2K
    @HedgehogY2K Год назад +6

    5:10 I don't know why but I felt a chill down my spine of someone out there jaw dropping at this zoomed out moment.

  • @FeTiProductions
    @FeTiProductions Год назад +492

    I'm happy this has come up as a topic of conversation. I've been wanting someone to test SSD speed against, say, yesteryear RAM versions like DDR3, DDR2, etc., to see if SSDs used as swap files are at parity with RAM.

    • @SpruceMoose-iv8un
      @SpruceMoose-iv8un Год назад +41

      Imagine the day when SSD's become as fast as modern ram then ram can be deleted from a PC.

    • @dial2616
      @dial2616 Год назад +116

      @@SpruceMoose-iv8un you're not thinking big enough: you could open a game, unplug your computer, and then turn it back on and keep playing where you left off. Volatile memory holds us back from a lot of stuff.

    • @katrinabryce
      @katrinabryce Год назад +121

      Access times are about 10µs, whereas RAM is about 10ns, so about 1000 times slower. RAID improves data transfer rates but doesn't do anything for access time.

    • @Anti3D-0
      @Anti3D-0 Год назад +33

      Intel's 3D xpoint was the closest thing to a NAND and RAM hybrid, too bad it didn't take off

    • @AndrewTSq
      @AndrewTSq Год назад +5

      It would be cool if someone made some hardware that used lots of GDDR6X RAM modules as storage, with some sort of battery backup to keep the data when the computer is off (and then ran them in RAID :D).
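
The throughput-versus-latency distinction raised in this thread can be put into a toy model. The per-device numbers below are assumptions for illustration only:

```python
# Toy model: striping N drives multiplies sequential bandwidth,
# but a single small random access still pays one drive's latency.
# All per-device numbers are illustrative assumptions.
drive_bw_gbs = 7.0       # sequential read of one fast NVMe drive, GB/s
drive_latency_us = 10.0  # NVMe random-access latency, microseconds
ram_latency_ns = 10.0    # DRAM access time, nanoseconds

for n in (1, 4, 21):
    print(f"{n:2d} drives striped: {n * drive_bw_gbs:5.1f} GB/s, "
          f"{drive_latency_us:.0f} us per random access")

slowdown = drive_latency_us * 1000 / ram_latency_ns
print(f"per-access latency is still ~{slowdown:.0f}x worse than RAM")
```

Bandwidth scales with the drive count, but the last line is why a RAID of SSDs still can't replace RAM for latency-sensitive work.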

  • @harisjaved1379
    @harisjaved1379 Год назад +122

    This is perfect for data centers! I would love something like this for computer vision models! But I am sure we need motherboards with a lot of pcie lanes, like on the thread ripper.

    • @nadtz
      @nadtz Год назад +16

      A lot easier with u.2 or EX.x in the datacenter for hot swap and cooling. And sapphire rapids/EPYC have a lot more PCIE lanes, I.E. it already exists. Not to take away from how cool this card is mind you.

    • @DFX2KX
      @DFX2KX Год назад

      @@nadtz Yeah, this card is rather niche. It strikes me as something you'd use when you needed the space but couldn't use a full rack mount for whatever reason.

    • @phuzz00
      @phuzz00 Год назад +2

      @@DFX2KX The other advantage of this card is that it takes completely standard PCIe drives, so you can buy them off the shelf, rather than having to pay enterprise prices for U.2 etc.
      Or I suppose if you couldn't get the budget to upgrade an older server, but it had a spare PCIe slot and you wanted to really spice up its storage...

    • @runforestrunfpv4354
      @runforestrunfpv4354 Год назад

      ​​@@nadtz E1.L doubles a neat intruder protection.

    • @lescobrandon3949
      @lescobrandon3949 3 месяца назад

      It's definitely not good for enterprise use tbh. Those drives only have like a 3800 TBW rating.

  • @Sptn051
    @Sptn051 Год назад +181

    I could totally see this device being used in massive weather simulations where you need to store the values of the atmospheric conditions within individual blocks of data and need the fastest access possible to that data. Being able to store the entirety of the information contained within a storm on a single device would prove invaluable to meteorologists, especially Dr. Orr in Minnesota who's been simulating tornadoes in his supercomputer.

    • @roadchewerpe5759
      @roadchewerpe5759 Год назад +7

      I’m a met student starting in the fall and I was thinking similarly.

    • @hmcredfed1836
      @hmcredfed1836 Год назад

      noice

    • @davidgomez5116
      @davidgomez5116 11 месяцев назад

      I just want this for the small form factor...

  • @playeronthebeat
    @playeronthebeat Год назад +102

    I'm in love with this thing because of NAS storage stuff. Sure, this card is really expensive and might be overkill but having 21 SSDs gives you a lot in terms of fault tolerance (RAID5 or RAID6) while also consuming less space.
    I'm excited to see where this is going!

    • @jamescheddar4896
      @jamescheddar4896 Год назад +1

      supercloud, metaverse, digital convergence, whatever you wanna call it

    • @bosstowndynamics5488
      @bosstowndynamics5488 Год назад +1

      It's very cool and I would love to have one but it would be cheaper to use 4 way passive cards in Xeon W or Epyc (including an upgrade to Xeon W - the card alone is 3 grand) if you only wanted one card worth of SSDs, it would also give you better cooling since those cards have decent heatsinks

    • @mtrebtreboschi5722
      @mtrebtreboschi5722 Год назад +1

      You can even get 4TB Crucial NVMe drives at Best Buy for $200 right now, so I could totally see this being in a consumer media server. If a drive fails, it'd take like an hour to rebuild from parity instead of like a week with an HDD.

    • @playeronthebeat
      @playeronthebeat Год назад +1

      @mtrebtreboschi5722 still, for the price of one 4tb SSD, I could get roughly two 4TB HDDs or one 8TB HDD, which, in most cases, is more than enough.
      But yes, you're right: with the trend being that drives get bigger, faster, and cheaper per TB generally, it'll most likely be a thing of the near future.

    • @mtrebtreboschi5722
      @mtrebtreboschi5722 Год назад

      @@playeronthebeat Yeah, HDDs are at least 3x cheaper per terabyte than any high-capacity SSD; Seagate Exos drives often go on sale for $14 per TB.
      SSDs have the advantage in size, power consumption, rebuild time, and read-only longevity, which will all mostly be negligible for most people, but I guess if somebody has the money, this is the way to do it.
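
For the 21-slot card under discussion, the fault-tolerance trade-off mentioned at the top of this thread works out as follows (a sketch using the video's 8TB drives; the `parity_raid` helper is hypothetical):

```python
# Usable capacity vs. parity overhead for the X21's 21 M.2 slots.
def parity_raid(drives: int, tb_each: float, parity_drives: int):
    """Return (usable TB, simultaneous failures survived) for simple parity RAID."""
    return (drives - parity_drives) * tb_each, parity_drives

raw_tb = 21 * 8.0
for name, parity in (("RAID 5", 1), ("RAID 6", 2)):
    usable, survives = parity_raid(21, 8.0, parity)
    print(f"{name}: {usable:.0f} TB usable of {raw_tb:.0f} TB raw, "
          f"survives {survives} drive failure(s)")
```

With this many drives the parity overhead is small: even RAID 6 keeps over 90% of the 168 TB raw capacity usable.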

  • @the_ratmeister
    @the_ratmeister Год назад +32

    We finally found a use for the Radeon VII: A measuring stick

  • @ElFlacoCs
    @ElFlacoCs Год назад +45

    Bro could have 66,945,606 (66 million) copies of the original doom

  • @Neoxon619
    @Neoxon619 Год назад +398

    I was actually expecting something jank when Alex showed up, but just 21 SSDs on a PCIe card is surprisingly tame. With that said, I wonder how much of the performance would be retained if this SSD was used on a PS5.

    • @ethancrawford197
      @ethancrawford197 Год назад +75

      Alex and no jank? I’m disappointed
      Edit: got to the cooling part. I am now satisfied.

    • @Simon_Denmark
      @Simon_Denmark Год назад +14

      How would you even connect it to PS5? It’s not like you can throw a riser there. Or do you mean If Sony used one?

    • @DLTX1007
      @DLTX1007 Год назад +23

      @@Simon_Denmark M.2 to PCIe risers do actually exist... you just need to figure out a way to power them

    • @notme222
      @notme222 Год назад +12

      I can tell you wrote that before they got to the cooling section. 😅

    • @AGRACUTA
      @AGRACUTA Год назад +5

      Think he meant just one of those Sabrent ssds not the whole pair card

  • @QualityDoggo
    @QualityDoggo Год назад +25

    the insane part is how much of a "great fit" and "bargain price" this could be considered for high-end enterprise and workstation systems working on big datasets. This density of performance and storage without "custom" drives is unrivaled

  • @username8644
    @username8644 Год назад +36

    Machine learning is definitely going to be a big use case. Even the sample datasets we use, given for educational purposes and not meant to be very challenging, are easily 500GB in size sometimes. And those are relatively small datasets, since they really do not have that many different variables. Now imagine having to handle literally petabytes of data.
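
One standard way to handle datasets far larger than RAM, as described above, is to stream them in bounded chunks instead of loading whole files; a minimal stdlib-only sketch (the `iter_chunks` helper is hypothetical):

```python
import os
import tempfile

def iter_chunks(path: str, chunk_bytes: int = 64 * 1024 * 1024):
    """Yield a file's contents in fixed-size chunks so memory use stays bounded."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_bytes):
            yield chunk

# Demo on a tiny temporary file (a real dataset would be hundreds of GB).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1000)
    path = f.name

total = sum(len(c) for c in iter_chunks(path, chunk_bytes=256))
print(total)
os.remove(path)
```

Fast storage like this card mainly shrinks the wall-clock time each pass over such a stream takes.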

  • @Foiliagegaming
    @Foiliagegaming Год назад +32

    You know, this just makes sense, having cards like this with chips that can handle all of this. I cannot imagine how much this could help once PCIe 5 really starts to come around in the data center. If they came up with a card that could effectively double the number of drives using PCIe 3 on the same card, that would keep more out of landfills long term. Love this tech!

    • @nadtz
      @nadtz Год назад +4

      You are much more likely to see ex.x or u.2 in datacenters. Imagine how much fun having to swap one of these drives out would be compared to walking over and replacing the drive sled with the blinking light.

    • @MostlyPennyCat
      @MostlyPennyCat Год назад +1

      ​@@nadtz
      That's just an engineering problem though, I could imagine seeing a drive sled that contains 4 mini sleds (I name them Slugs™️) each with their own sub blinky blink.

    • @nadtz
      @nadtz Год назад +2

      @@MostlyPennyCat Not exactly sure why when the solution already exists.

  • @RSBickAKaby
    @RSBickAKaby Год назад +95

    Thanks SABRENT for making this happen!

    • @Azmodaeus49
      @Azmodaeus49 Год назад +4

      They don't care about you 😂

    • @RSBickAKaby
      @RSBickAKaby Год назад +43

      @@Azmodaeus49 True but I got to watch a cool video for free so I'm not complaining

    • @ownedmaxer607
      @ownedmaxer607 Год назад +13

      @@Azmodaeus49 forgot to take your happy pills again?

    • @NonsensicalSpudz
      @NonsensicalSpudz Год назад +2

      @@Azmodaeus49 true in more ways than one, their customer service is bad apperently

  • @jakobpavli2646
    @jakobpavli2646 11 месяцев назад +5

    How to build the world's most expensive toaster :D

  • @JordansTechJunk
    @JordansTechJunk Год назад +119

    The X21 is more than just a one-trick pony. The M.2 slot supports a number of other devices. For instance, you could use wireless cards with M.2 A+E converter boards to set each wireless card to a specific wireless channel for spectrum monitoring. There’s even an AI angle here. A company called Axelera makes an M.2 AI Edge accelerator module that could be used by the Apex card at some point as well.

    • @puerlatinophilus3037
      @puerlatinophilus3037 Год назад +7

      Or the Coral TPU could also fit on this.
      *Slaps top of card*
      This bad boy can fit so much AI processing

    • @Versuffe
      @Versuffe Год назад +5

      I love how we can just mess with funny brain replicas for shits and giggles

    • @JordansTechJunk
      @JordansTechJunk Год назад +4

      @@puerlatinophilus3037 we have one of the other 2* and are looking at those accelerators in it right now.

    • @luminatrixfanfiction
      @luminatrixfanfiction Год назад

      @@JordansTechJunk Meanwhile, here I was just thinking of using 21 Asus pce-AC88 wireless cards to pci m.2 adapters and running all 21 wireless cards in parallel for a grand total of a theoretical internet speeds of 44.1 Gb/s upload and download. Do you think that's too much RF radiation? 😁

    • @jerryandersson4873
      @jerryandersson4873 Год назад

      @@luminatrixfanfiction I, who know nothing really, could speculate about interference making it not work.
      Not good enough anyway.
      But then again, with the right hardware and software that may not be a thing to worry about; I am simply speculating.

  • @David_Quinn_Photography
    @David_Quinn_Photography Год назад +120

    It's crazy to think that just a few years ago Sabrent was the "try it if you want" brand. I bought a 240GB drive probably 5 years ago and I was skeptical, but damn, they are on my list of good brands for M.2s.

    • @Argedis
      @Argedis Год назад

      What are your thoughts on TeamGroup?

    • @TechProFury
      @TechProFury Год назад +2

      @Valor_X teamgroup is not bad. I think they are similar to silicon power, but still a little early i think.
      May be better.

    • @passmelers
      @passmelers Год назад

      my sabrent drives died within a year. WD is better.

    • @isaacbejjani5116
      @isaacbejjani5116 Год назад +1

      ​@@passmelers samsung is the best

    • @n.shiina8798
      @n.shiina8798 Год назад

      @@isaacbejjani5116 Nah. Hynix, Micron, and Intel are the best, at least in my opinion. Idk about their consumer-grade SSDs, but some of Samsung's enterprise SSDs are just.. problematic. The early PM863, PM883, and PM1733 seem to have issues with their firmware.

  • @glassby5882
    @glassby5882 Год назад +3

    Imagine watching this video 10-20 years from now, and you can fit that amount of data in your pocket. Tech is so wild, I'll never get enough.

  • @ezrapierce1233
    @ezrapierce1233 Год назад +18

    13:49 there is something so hilariously silly and simple about Linus using cardboard in a cooling solution.

  • @testbenchdude
    @testbenchdude Год назад +27

    Man, I remember back in the day running two of the smaller Raptor drives (32gb?) in Raid 0 and just loving life. I may not have had the best graphics card or processor, but I loaded into BF2 faster than any of my friends for a while. What a time to be alive.

    • @BNEA02
      @BNEA02 Год назад +7

      Always first to GET TO THE CHOPPA, remember.

    • @hexacetat
      @hexacetat Год назад

      I was so proud of my raptor. Better days.

    • @sim2er
      @sim2er Год назад

      My raptor 600 is still going, but I use it as a temp/cache drive because I'm afraid it might reach EOL at any time

    • @Romess1
      @Romess1 Год назад

      My colleague had one and I remember my ignorant brain saying 14,7GB 15k SCSI drives in RAID are better. Never mind the 2K setup costs.

  • @GoldRaven-oe4by
    @GoldRaven-oe4by Год назад +7

    Imagine using this as vram

  • @runningmelon3257
    @runningmelon3257 Год назад +6

    Sabrent just asking for a shoutout and casually sending over that many 8TB SSDs is a big W move! They truly support Linus's craziness, and we all benefit from it! :D

  • @davidawaters
    @davidawaters Год назад +42

    I could see this working well for engineering simulations (FEA, CFD) that are effectively limited by the amount of ram that you have. Before something like this, maxing out the ram and letting the solver use the hard drive would increase solve time by something like 10x.

    • @jeschinstad
      @jeschinstad Год назад +2

      But you would most likely use DCPMM for that because it has much lower latency. You could probably use this for some kind of custom swap in some cases, but at some point there's diminishing returns. The advantage of this card is that it's compact. There are systems that have lots and lots of u.2 connections and can support far greater capacity and speed.

    • @DigitalJedi
      @DigitalJedi Год назад +1

      The newest simulation machines where I work are for chip cooler simulations. They use CXL RAM as a layer between the SSDs and RAM. We also have sapphire Rapids HBM in dual-socket for it as well.

    • @radugrigoras
      @radugrigoras Год назад +1

      This would still be slower; 25GB/s on RAM is per channel, and if you are running simulations that intense I would think you would have a very decent amount of RAM and be running it in 8-channel. This is cheaper for sure, but I don't think the OS is designed to use swap like that; I think it still goes through RAM first and constantly dumps back and forth from RAM to disk, increasing your CPU usage drastically. This would however be very useful in AI training: the datasets are massive, and GPUs nowadays, especially if you have like 4 A6000s, could eat up quite a few IOPS. I would like to see a water-cooled version of this; maybe they will partner up with an SSD manufacturer and offer that later as a factory-assembled package.

    • @jeschinstad
      @jeschinstad Год назад

      @@radugrigoras: There's nothing particularly special about this. It sacrifices speed for compactness is all. So it's great for a system with few PCIe lanes or systems that need to be small. There are very few situations where this is something you'd even care to have.

    • @davidawaters
      @davidawaters Год назад

      I haven’t done this kind of work since 2013 (finite difference simulations on a 192gb, dual 8 core Xeon machine) so I’m guessing things have changed and gotten better in that time span.

  • @nicknorthcutt7680
    @nicknorthcutt7680 Год назад +57

    Just imagine how insane the amount of storage you'd have with multiples of these! It's crazy that with only a handful of these you're already at a petabyte, and in such a tiny amount of space. Crazy.

  • @Enclave.
    @Enclave. Год назад +50

    I always really love Alex's solutions, they're why I always get excited for a new video with him in it.

  • @77kgggdkp
    @77kgggdkp Год назад +2

    14:07 (insert meme) when your ssd is faster than your ram memory...

  • @SuperHaloman22
    @SuperHaloman22 Год назад +35

    Those chips are used quite a bit in NAS and storage backups in my enterprise environment. Dell Avamar and Isilon units will connect to switch fabrics with those chips. Very fast stuff!

    • @Thalanox
      @Thalanox Год назад

      I've been looking for some solutions for backing up my personal files and media collection. Can you recommend any resources that will help me do that? I have older stuff on old nearly-dead HDDs that have been removed from their systems, but that's clearly not a comfortable, current, or reliable storage method

    • @MaxC_1
      @MaxC_1 Год назад

      @@Thalanox Why not just build a NAS using TrueNAS, or just ZFS as a base? It's very well documented, easy, and can run on an old system. Just grab a bunch of used certified drives and an old PC, swap the power supply for something decent, and you're pretty much done. Set up a RAID array, put the files on it, and call it a day. Drive fails? You're still covered.

    • @BozesanVlad
      @BozesanVlad 1 year ago

      @@Thalanox Electrons escape gates, so... keep using your backups on SSDs

    • @deepspacecow2644
      @deepspacecow2644 1 year ago

      ​@@Thalanox If you want something enterprise grade get a dell r730. It's older, but will do pcie bifurcation and supports nvme

  • @Zarrx
    @Zarrx 1 year ago +56

    If you'd asked me 15 years ago where storage speeds could be, I wouldn't have guessed this fast... It's hard for me to be super excited because I feel like the consumer application isn't really there... but the use case in cloud computing will be huge, and we'll see its effects in our services. It's interesting how obfuscated this technology is for general consumers even though we'll all see the benefits.

    • @Argedis
      @Argedis 1 year ago +4

      I remember being impressed when early SATA 2 SSDs were breaking 200MB/sec read speeds and people would RAID 0 them for 400+MB/sec.
      I'm still super impressed at my Gen 3 NVMe's 3,500MB/sec speeds... how far tech has come

    • @isaacbejjani5116
      @isaacbejjani5116 1 year ago +2

      ​@@Argedis yea, Gen 4 drives are so fast nowadays that there's really no reason to raid 0 for most people

    • @johntoto5496
      @johntoto5496 1 year ago +1

      Server motherboards already have a ton of SSD slots built in. I built 2 servers like that 7 years ago with 10 Intel SATA SSDs (consumer grade) for my job. I don't remember the exact performance (probably 10x SATA speed) but it was insane for the relatively low cost. We put 40Gbps network cards and a 40Gbps switch between them and built one storage array spanning the two servers. It was a proof of concept for a cheap high-availability Hyper-V cluster. Good times.

    • @lolmao500
      @lolmao500 1 year ago

      Yeah, we need this in consumer hands. FFS, I still have 2 mechanical drives in my computer and 5+ HDDs for storage because they have big capacities and are cheap... even if way slower.

  • @uncommonsense360
    @uncommonsense360 8 months ago +1

    18:30 The Anthony phone-a-friend lifeline is my favorite thing ever

  • @YounesLayachi
    @YounesLayachi 1 year ago +4

    12:25 55°C is apparently the sweet spot for nand flash, it doesn't like being colder or hotter
    Colder = increased data retention, write wear
    Hotter = increased hardware degradation

    • @pandozmc
      @pandozmc 1 year ago

      Anywhere I can read up on that?

    • @YounesLayachi
      @YounesLayachi 1 year ago

      @@pandozmc I think I first read this on an EKWB site but now I can't find it.
      After a bit of googling I found this AnandTech article from 2015, back when SSD data retention periods were becoming a concern.
      I should note that my memory isn't quite accurate, so if you do look this up, feel free to ignore what I said in the previous comment.
      Here's what AnandTech says:
      _The conductivity of a semiconductor scales with temperature, which is bad news for NAND because when it's unpowered the electrons are not supposed to move as that would change the charge of the cell. In other words, as the temperature increases, the electrons escape the floating gate faster that ultimately changes the voltage state of the cell and renders data unreadable (i.e. the drive no longer retains data)._
      _For active use the temperature has the opposite effect. Because higher temperature makes the silicon more conductive, the flow of current is higher during program/erase operation and causes less stress on the tunnel oxide, improving the endurance of the cell because endurance is practically limited by tunnel oxide's ability to hold the electrons inside the floating gate._

  • @LanceBryantGrigg
    @LanceBryantGrigg 1 year ago +12

    That is insane. As a database engineer I want one of these things.

  • @sonofsons770
    @sonofsons770 9 months ago +1

    168 terabytes. My man is trying to download the entire Steam library

  • @DraggardArcane
    @DraggardArcane 1 year ago +4

    Oh Linus not the TOES! 2:25

  • @pascallaforge375
    @pascallaforge375 1 year ago +11

    I could probably see a card like that for bioinformatics use cases. Whole-genome sequencing datasets can be multiple TB of data; a card like this would 100% speed up analysis of such a massive amount of data

  • @LearningandTechnology
    @LearningandTechnology 1 year ago +7

    This would be great for a Virtual Computer center - I’m biased towards education - having Lab VMs available on demand across a LAN would be fantastic.
    I'm only part way through watching as I post this, but I would be very interested in IOPS under a heavy access load.
    Aaaaand… you didn't let me down! You did the IOPS. Writes in a RAID will always cause a performance hit (especially because you won't be able to use a single parity bit and will incur extra cost above 8TB - which this definitely is!), but if I were using this it would be for VMs and data with a heavier READ profile, and it would be pretty cool.
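The RAID write penalty mentioned above can be ballparked with the textbook penalty formula (the raw IOPS number below is an arbitrary assumption for illustration, not a figure from the video):

```python
# Classic back-end I/O cost per front-end write for common RAID levels:
# RAID 5 = read data + read parity + write data + write parity = 4 I/Os,
# RAID 6 = 6 I/Os (two parity blocks).
WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid5": 4, "raid6": 6}

def frontend_iops(raw_iops: float, level: str, read_fraction: float) -> float:
    """Usable front-end IOPS given raw back-end IOPS and a read/write mix."""
    w = 1.0 - read_fraction
    return raw_iops / (read_fraction + w * WRITE_PENALTY[level])

# A read-heavy VM/data profile (70% reads) hurts far less than pure writes:
print(frontend_iops(1_000_000, "raid6", read_fraction=0.7))  # 400000.0
print(frontend_iops(1_000_000, "raid6", read_fraction=0.0))  # ~166667
```

This is why the commenter's read-heavy VM workload keeps most of the array's performance while a write-heavy one pays the full parity tax.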

  • @JayzBOT
    @JayzBOT 1 year ago +35

    Clicked on this video so fast that even the SSD couldn't keep up 😅

  • @edwardkostreski6733
    @edwardkostreski6733 1 year ago +13

    Linus: I will suck on your toes for this many drives.
    Sabrent: Send it!
    😂

    • @Andytlp
      @Andytlp 1 year ago +1

      Yvonne would get jealous no way jose

  • @twocows360
    @twocows360 1 year ago

    16:15 If I remember correctly, when Task Manager says a disk is at 100%, that just means it was actively servicing I/O 100% of the time since the last measurement tick.
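A minimal sketch of that metric, assuming it really is just busy time divided by the sampling interval (names here are illustrative, not Task Manager internals):

```python
# "% active time" = fraction of the sampling window during which the disk
# had at least one request in flight, capped at 100%.
def disk_active_percent(busy_ms: float, interval_ms: float) -> float:
    return min(100.0, 100.0 * busy_ms / interval_ms)

# A disk busy for the whole 1000 ms tick shows 100% even if it could still
# absorb far more parallel I/O - so 100% != saturated for a fast NVMe array.
print(disk_active_percent(1000, 1000))  # 100.0
print(disk_active_percent(250, 1000))   # 25.0
```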

  • @lurick
    @lurick 1 year ago +11

    Now if only Optane was still around! :D

  • @Bob-nc5hz
    @Bob-nc5hz 1 year ago +61

    The AI thing does make sense especially when combined with DirectStorage-type tech.

    • @MrNoipe
      @MrNoipe 1 year ago

      it doesn't, because most datasets are precomputed and cached

    • @eadweard.
      @eadweard. 1 year ago +6

      ​@@MrNoipe How does being precomputed and cached negate Direct Storage benefits?

    • @ryanthompson3737
      @ryanthompson3737 1 year ago

      @MrNoipe Cached... you mean stored on a massive bank of drives? Beyond that, there's still computing needed to mesh data together to create something new... this is where quick-access storage helps. Being able to access multiple pieces of data quickly means you're only limited by computing resources, not storage resources, which tends to be the issue.

    • @affegpus4195
      @affegpus4195 1 year ago

      ​@@MrNoipe it makes sense when training the ai

  • @AaronJLong
    @AaronJLong 1 year ago +2

    Over 10 years ago, before discovering Netflix back when it was good, I would be asked by my family to, find, and host entire seasons of their favorite shows as well as some of their favorite movies. I used to spend ages burning custom DVDs, but we were in the future with a 1080 HDTV and our Xbox 360 could connect to my PC using windows media center turning it into an HD streaming home server on the side. This meant a lot of time downloading overnight, and my 1TB-ish hard drive with more storage than I thought I would ever need was starting to actually visibly fill up, as I also had many multi-gig games as well as rips of the disc images for easily swapping into an emulated drive without swapping/scratching physical media, meaning my physical games needed space for the install and ISO, as well as gigabytes of music in my "My Little Pony Fan Music/Remixes" folder alone. Basically it was a well-loved PC that I volunteered my extra space to turn into a windows media center HD video streaming server for my family on the side.
    Now Netflix doesn't have all of the good shows anymore, and it is so expensive and inconvenient to be able to stream all of the shows you like as you juggle tons of account credentials, and it is getting harder if not impossible to have an ad-free experience no matter how much money you throw at them.
    So what if I wanted the entire library of every good show and movie stored locally in HD, or even 4K as we just upgraded a few months back? I don't want TV myself but I doubt the streaming services or our ISP are letting us stream much if any 4K content despite us having the fastest internet package in our area and I purchased a 4k streaming capable router last time we needed one so we could be 4K ready when the time came.
    Well, acquiring thousands of hours of TV and movies in the best quality available is its own beast, but should I ever win the lottery at least I know there's a product that can theoretically work with a home machine, with a modular nature so extra storage can be added as needed instead of having to buy a larger drive and copy the files from the old one, and also doesn't require that you buy $30k+ worth of SSDs up front and just add more as needed. Basically, if you're willing to spend your time ripping a massive collection of 4K blu-rays and whatever else it takes to get your hands on every episode of every show everyone in your household loves, and the movies too, in the highest quality available, you can make your own media server with no ads or switching between 3 different services to watch every Star Trek series and movie. No "we can't watch this because my parents in the next state are on the account right now"
    It would be a huge money and time investment to get the card, drives, and all of the media, but it is a lot more technically feasible now to have your own HD/4K media center just by adding an extra card to a regular PC.

  • @Wayhoo
    @Wayhoo 1 year ago +15

    Thanks to you Linus, I'm building my first PC this week!

  • @rakeau
    @rakeau 1 year ago +6

    Would be nice to see a scaled down version of this, for those who want this kind of thing but without quite so much cost or overkill.

  • @rangefreewords
    @rangefreewords 7 months ago +3

    The commercial chiller that you have for your lab should be used with this on a milled cooling block, just for this application. See what you could do as far as demos go. You might be able to see those numbers climb higher.

  • @ryansather8851
    @ryansather8851 1 year ago +5

    Honestly, if this gets affordable for the average consumer, it could be an absolute game changer for system and game developers alike.

  • @PiotrekR-aka-Szpadel
    @PiotrekR-aka-Szpadel 1 year ago +7

    Just to be clear, the BIOS RAID limitation does not matter for anyone in the professional space.
    RAID is basically broken for anything serious - it can detect that a drive is dead and then migrate to a spare one (in the case of something like RAID 5),
    but the issue is when a drive is not yet dead but failing - in that case it might corrupt your data.
    That's why for any serious data storage you want to use something like ZFS, so it's able to check checksums of your data, detect that something was corrupted on the drive, and fix it on the fly.
    What would also be a useful use case for that amount of IOPS? ZFS deduplication, which requires adding another layer before accessing data -> you want to get data block X, so you first read from the deduplication map, get the read address for data Y, and then access that data. This basically halves the amount of IOPS you can do (assuming no cache hits in memory), which is why it's not used that often.
    In this configuration you could get much more effective storage when expecting duplicated data
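The self-healing idea described above can be sketched in a few lines (a toy model using SHA-256; real ZFS uses its own checksum algorithms and block-pointer trees, so this is only the shape of the mechanism, not its implementation):

```python
import hashlib

def checksum(block: bytes) -> str:
    """Per-block checksum stored separately from the data itself."""
    return hashlib.sha256(block).hexdigest()

def read_with_repair(primary: bytearray, mirror: bytes, expected: str) -> bytes:
    """Verify on read; if the primary copy silently rotted, heal it from the mirror."""
    if checksum(bytes(primary)) == expected:
        return bytes(primary)
    if checksum(mirror) == expected:
        primary[:] = mirror          # rewrite the bad copy ("self-heal")
        return mirror
    raise IOError("both copies corrupt")

data = b"block 42"
good = checksum(data)
copy_a = bytearray(data)
copy_a[0] ^= 0xFF                    # simulate silent bit rot on one drive
assert read_with_repair(copy_a, data, good) == data
assert bytes(copy_a) == data         # the corrupted copy was repaired
```

This is the key difference from plain RAID: the checksum tells you *which* copy is wrong, so a half-failing drive can't silently win.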

    • @THE-X-Force
      @THE-X-Force 1 year ago +3

      I haven't built a computer in 20 years .. and I feel like I know absolutely nothing now.

    • @AlexeiDimitri
      @AlexeiDimitri 1 year ago

      Well, the Linus guys fell for this very thing some years ago... search the channel for "Our data is GONE... Again - Petabyte Project Recovery".
      They relied solely on ZFS to handle the disks, and they lost.

  • @ohyeah117
    @ohyeah117 1 year ago +4

    Uhhh... I'm sorry did I just see a Leather LTT backpack?? @9:07

  • @ryozukojima
    @ryozukojima 1 year ago +6

    I love how they included that clip of shocking hardware to test if it can be killed while Linus was holding it with his thumb on the slot contacts. You just know someone had commented how he was going to zap the board.
    Edit: oh man, this just reminded me of something I had back in the IBM XT days. Back then I had a 20 MB hard drive built into an ISA card, aka a Hard Card.

  • @SpeedyMckeezy
    @SpeedyMckeezy 1 year ago +18

    Love it when people go "hey! that's a bottleneck" to setups they will never have and could never afford

  • @ChairmanMeow1
    @ChairmanMeow1 7 months ago

    This was a year ago and Linus was right. These cards did catch on a little, and they work. I put another brand's PCIe 4.0 card in an older computer, stuck 2 M.2 drives on it, and it works great. I can't tell they're on a PCIe board and not on the mobo.

  • @anthixious
    @anthixious 1 year ago +6

    Linus: "Under normal circumstances, you wouldn't do something dumb like configure 21 drives in a RAID0"
    Also Linus: Heh heh, you wanna do a RAID0?
    This product is bananas, I'm gonna have to watch this vid a few times more.

  • @ImSenzoku
    @ImSenzoku 1 year ago +10

    Me: *buys one SSD*
    Linus: *buys a pcie SSD card the size of a graphics card*

    • @Xenoray1
      @Xenoray1 1 year ago

      cost: close to 30 Nvidia RTX 4090s

  • @zergidrom4572
    @zergidrom4572 1 year ago +4

    1:34 I'm no expert, but that SSD bend does not look healthy

  • @MikkoRantalainen
    @MikkoRantalainen 1 year ago +11

    Having two of those cards, each running RAID-0, combined as a RAID-1 would be a pretty much perfect setup for a big database server. Combine those cards with, say, 1 TB of RAM and you can execute huge queries very rapidly against a 100 TB database.

    • @AmaroqStarwind
      @AmaroqStarwind 1 year ago +1

      You might need a lot of CPUs or GPUs as well, to actually move all of that data around.

    • @AmaroqStarwind
      @AmaroqStarwind 1 year ago

      Or perhaps those newfangled Data Processing Units, whatever those are.

    • @PuppetMasterdaath144
      @PuppetMasterdaath144 1 year ago

      fart hehe

    • @exorr81
      @exorr81 1 year ago +1

      That's called RAID-10 😂

    • @MikkoRantalainen
      @MikkoRantalainen 1 year ago +2

      @@exorr81 That's true. I explained it as a combination of RAID-0 and RAID-1 for two reasons:
      (1) RAID-0 and RAID-1 are easier to understand for most people than yet another RAID mode.
      (2) If you configure such a setup as RAID-10 directly, chances are higher that you mess it up in a way that causes the whole RAID to go offline if one of those cards fails (that is, you configure stripes and mirrors incorrectly across all 42 devices).
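The pairing concern in point (2) can be illustrated with a toy model (drive names are made up; 21 drives per card as in the video):

```python
# Mirror each drive on card A with its counterpart on card B
# ("stripe of mirrors"), so losing an entire 21-drive card still
# leaves one surviving copy of every stripe.
CARD_A = [f"A{i}" for i in range(21)]
CARD_B = [f"B{i}" for i in range(21)]
mirror_pairs = list(zip(CARD_A, CARD_B))   # (A0,B0), (A1,B1), ...

def survives(failed_drives: set) -> bool:
    """The array is alive iff every mirror pair keeps at least one member."""
    return all(not ({a, b} <= failed_drives) for a, b in mirror_pairs)

assert survives(set(CARD_A))          # whole card A dies: still alive
assert not survives({"A3", "B3"})     # both halves of one pair: array dead
```

If the mirrors were accidentally built *within* a card instead of across cards, `survives(set(CARD_A))` would be False, which is exactly the misconfiguration the comment warns about.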

  • @bkucenski
    @bkucenski 1 year ago +7

    I was going to get a 2nd Synology and put 8TB SSDs into it. But it looks like M.2 storage is being taken seriously as an option for RAID. I'd much rather go with NVMe if it can be had at a comparable price.

  • @BrunoSR1985
    @BrunoSR1985 7 months ago

    You guys are doing the kind of thing that I always wanted, but never had the money

  • @lilkidsuave
    @lilkidsuave 1 year ago +4

    Linus's face when Alex mentioned chia mining
    😂😂😂

  • @Torbjorn.Lindgren
    @Torbjorn.Lindgren 1 year ago +7

    The primary use for these PCIe switches is actually in servers - when there are lots of SSDs in a server backplane they're sometimes connected directly, but often there are one or more big PCIe switches driving the slots. And each switch will have a 16x or 32x (yes, this is a thing even if the standard only lists up to 16x) PCIe 4.0 or 5.0 uplink. This is the real reason why PCIe link speeds are again trending upwards quickly after a long hiatus (between 3.0 and 4.0): there's finally a user (servers) willing to pay for development (there aren't enough enthusiasts to pay for it, by orders of magnitude).
    And I assume the reason the card doesn't have any fans is because they expect it to end up in servers, where there's a large amount of airflow provided by the server itself (sometimes capable of cooling a 400W GPU or AI accelerator with just the server's airflow, no fan on the card).
    It's also unfortunately why PCIe switches disappeared from consumer motherboards (many early SLI motherboards had them): the PCIe switch vendors jacked up the prices by 10x because most were sold to server vendors, and they were willing to pay.
    "Better" (i.e. expensive) server backplanes often accept PCIe (U.2 or U.3, both 4x PCIe), SAS (12Gbps) or SATA (6Gbps) in all slots by routing each slot either to the PCIe switch(es) or the SAS switch(es). Yes, SAS switches offer all the same features (fabric, multiple servers, dynamic load allocation to the "uplink" and so on) to both SAS and SATA disks; they've been in server backplanes for a long time (long before PCIe SSDs were a thing, never mind M.2). It does make me wonder if some of the PCIe switch chips also do SAS; that would reduce chip count and make routing far simpler, which means they can ask the server builders for more money...

    • @bosstowndynamics5488
      @bosstowndynamics5488 1 year ago

      As far as I can tell PCIe has been getting faster at a pretty consistent rate for many years, it's just Intel was a laggard in going to PCIe 4 so gen 3 was around longer and by the time they were on gen 4 we were already close to gen 5. If you look at it in terms of AMD instead gen 4 was around for plenty of time.

    • @Torbjorn.Lindgren
      @Torbjorn.Lindgren 1 year ago

      @@bosstowndynamics5488 No, 4.0 was seriously late. The official introduction years are 2003, 2007, 2010, 2017, 2019, 2022, (2025 planned) for 1.0 to 7.0 (check the official graphs or the Wikipedia article). Note how they are all around 3 years apart, except one that took 7 years - that's the 3.0 to 4.0 transition, and it's a big outlier.

  • @lordcorgi6481
    @lordcorgi6481 1 year ago +16

    I had thought of this when SSDs first came out. I thought to myself, what happens if storage drives become so fast that they make RAM obsolete. Being able to just read and write information straight off a storage drive has to be faster than going through a RAM middleman.

    • @andrewcore7672
      @andrewcore7672 1 year ago +1

      Well, it could be done; however, you would need to replace the "SSD RAM" often, as writes are the death of an SSD. Depending on the workload etc., I think you would need to swap it within a year.

    • @sune9578
      @sune9578 1 year ago +5

      The reason why we don't have SSDs replace RAM is that it's about more than just raw read/write speed: latency and longevity, both of which are affected by how data is written to SSDs.
      With that said, I'm sure there are highly specialized computers out there that don't have RAM and just use HDDs/SSDs, but they certainly aren't common for consumers.

    • @username8644
      @username8644 1 year ago +2

      That's not how it works. The closer you are to the CPU die, the better. That's why CPU cache exists, which is essentially "RAM" located as close to the die as possible. For example, the time to reference L1 cache is 0.5ns. The time it takes to reference RAM is 100ns. That is an absolutely massive difference. You can't only look at raw speeds; you need to factor in latency. That's why lower memory latency improves memory performance even if raw speed remains the same. Often a decrease in latency is much more significant than raw speed.
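Those latency figures, taken as rough order-of-magnitude assumptions (the NVMe number is my own ballpark, not from the comment), show why bandwidth alone can't make flash a RAM replacement:

```python
# Ballpark access latencies in nanoseconds (assumed round figures):
latency_ns = {"L1 cache": 0.5, "RAM": 100, "NVMe read": 20_000}

# RAM is ~200x slower than L1, and flash is another ~200x beyond RAM -
# each step down the hierarchy multiplies the wait per random access.
print(latency_ns["RAM"] / latency_ns["L1 cache"])   # 200.0
print(latency_ns["NVMe read"] / latency_ns["RAM"])  # 200.0
```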

  • @benitollan
    @benitollan 1 year ago +9

    2:25 "I would suck on your toes for this many drives" don't drop yourself like that Linus 🤣🤣
    PS: these jokes come exclusively from love and admiration -please don't drop me from your channel- .

  • @8-bitcentral31
    @8-bitcentral31 1 year ago +16

    Finally, a solution to the 2GB DDR2 RAM dilemma. (You can only have a max of 4GB on a 2-slot motherboard.)

    • @jeschinstad
      @jeschinstad 1 year ago

      DDR2 is still much faster because it has much lower latency.

    • @hubertnnn
      @hubertnnn 1 year ago

      @@marcogenovesi8570 Wait, why is anyone using DDR2?
      Does DDR2 have any advantage over DDR4/DDR5 that I am missing?

    • @jeschinstad
      @jeschinstad 1 year ago

      @@hubertnnn: I have some old systems running with DDR2. Work just fine. :)

    • @hubertnnn
      @hubertnnn 1 year ago

      @@marcogenovesi8570 Ok, I thought by "more recent" you meant, that they made a new DDR2 last year

  • @Ikxi
    @Ikxi 1 year ago

    12:00 I LOVE THAT BOARD

  • @benniwd2077
    @benniwd2077 1 year ago +6

    A PCIe 5.0 version would be very nice

  • @andreas7944
    @andreas7944 1 year ago +6

    @SABRENT I bought one of your SSDs recently. I'm usually brand-focused, and I did not know your brand before I saw several LTT videos with them. I just wanted to let you know that your sponsorship here actually works :)

  • @yeesaurus2669
    @yeesaurus2669 1 year ago +1

    21 SSD's?
    21, can you do sum for me
    Can you hit a lil rich flex for me

  • @Guru_1092
    @Guru_1092 1 year ago +6

    Probably one of the more interesting videos as of late. Clearly I need to do more research on SSD tech, PCIe, and RAID. This kind of went over my head at points.

  • @Nerex7
    @Nerex7 1 year ago +19

    I'd be hyped if there were a budget version of this. You can get a budget 1TB SSD for ~40€ currently here in Germany; the only issue is I've only got so many M.2 slots. This thing would solve that problem.

    • @laboulesdebleu8335
      @laboulesdebleu8335 1 year ago

      try this: B07T3RMFFT -- should do the trick

    • @MostlyPennyCat
      @MostlyPennyCat 1 year ago +4

      Turbo budget version, it's just a pcie to four m2 slots card. No logic, just wiring

    • @MostlyPennyCat
      @MostlyPennyCat 1 year ago +3

      Yep, just looked, they're already on eBay for about £25

    • @MostlyPennyCat
      @MostlyPennyCat 1 year ago +3

      About £50 for a 1TB Crucial NVMe M.2; are those considered budget drives or middling?

    • @tab8k
      @tab8k 1 year ago +2

      Those eBay cards need the mobo to support bifurcation, so they're not the same thing as this Apex card, which does not.

  • @erockvaughn2190
    @erockvaughn2190 1 year ago +3

    First thing I thought of was video editing. A feature film in 8K raw can take up massive drive space - about 8TB per hour of raw footage - and if you collaborate on a project, this would be ideal.

    • @juhajuntunen7866
      @juhajuntunen7866 1 year ago +1

      Some $30k is a minimal investment in a movie budget. And it still exists when the film is done.

  • @DmnkRocks
    @DmnkRocks 1 year ago +6

    On the topic of hot-swap M.2: this feature must be supported by the controller AND the SSD - and the Crucial P3 in the test does not support it.
    Through my personal testing, I've found that not all M.2 hot-swap support is the same - it's basically hit-or-miss (for me at least)

    • @eDoc2020
      @eDoc2020 1 year ago

      I thought by the nature of its design all PCIe device chips need to support hotswap. The hard part is power switching and mechanical considerations which make m.2 unsuitable.

  • @giusn
    @giusn 1 year ago +4

    Well, I literally just attached 8 NVMe 2TB drives to my ITX server. I bifurcated the x16 Gen 3 slot into two x8 slots. In each slot I placed a PEX 8747 board carrying 4 NVMe drives each.
    Prices of 2TB drives really went down recently. They're finally under $100.

    • @Neopopulist
      @Neopopulist 1 year ago

      Where can you get a 2TB NVMe for under $100? What brand, if you don't mind me asking?

    • @giusn
      @giusn 1 year ago

      @@Neopopulist my comment got deleted. In summary, Silicon Power.

  • @larsjrgensen5975
    @larsjrgensen5975 1 year ago +1

    It sounds like it dynamically routes PCIe lanes to where they're needed. It's super cool that it's possible to run any 4 drives at full PCIe speed, or 8 cheaper, slower drives at their lower speeds, at the same time.
    An onboard RAID controller could be perfect, so that the PCIe bus only needs to transfer the data to the drive controller once and the controller does the mirroring, instead of using PCIe bandwidth for the mirror.
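A toy model of the host-bandwidth saving described above (assuming host-side software RAID-1 sends every block twice over the host link, while an onboard controller duplicates it after the link):

```python
# Data crossing the host PCIe link for a mirrored write of n_gb gigabytes.
def host_bus_gb(n_gb: float, onboard_raid: bool) -> float:
    """Software mirroring ships each block twice from the host;
    an onboard controller fans the data out behind the switch."""
    return n_gb if onboard_raid else 2 * n_gb

assert host_bus_gb(10, onboard_raid=True) == 10   # one copy over the link
assert host_bus_gb(10, onboard_raid=False) == 20  # host pays for the mirror
```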

  • @PXAbstraction
    @PXAbstraction 1 year ago +3

    2:17 Undisclosed sponsorship.

  • @JaceFlury
    @JaceFlury 1 year ago +1

    5:49 Me choosing the video at night

  • @danhenton1978
    @danhenton1978 1 year ago +30

    Great video; would love to see more affordable non-bifurcation options like the HighPoint for comparison.
    Also, thanks for the Linux testing as well! Some of us don't use Windows 🤣

  • @GameCyborgCh
    @GameCyborgCh 10 months ago

    you know it's fast when Blu-Rays/second is a legit unit of measurement for the bandwidth

  • @Hortifox_the_gardener
    @Hortifox_the_gardener 1 year ago +4

    I want to see it with 21 Optane modules. Not for the balls-to-the-wall throughput, but for the IOPS, low-queue-depth performance, and latency.

  • @MarkBarrett
    @MarkBarrett 1 year ago +5

    I think 4 M.2 drives in a single 16x slot is the sweet spot.

    • @MarkBarrett
      @MarkBarrett 1 year ago +1

      Maybe 5 drives with a raid 5 controller.

  • @Nathan15038
    @Nathan15038 1 year ago +1

    20:48 As a crypto miner and a Chia farmer, he read my mind; I swear the whole time I was calculating how many plots I could create at once

  • @Atsumari
    @Atsumari 1 year ago +18

    I think this could also be used for high output high input AI data processing.

    • @MostlyPennyCat
      @MostlyPennyCat 1 year ago

      Yup.
      Especially with the new GPU direct read/write stuff, I forget what it's called.

    • @vasiovasio
      @vasiovasio 1 year ago

      Yes, in many ML applications :)

  • @mcxgaming7341
    @mcxgaming7341 1 year ago +4

    me in the back when my friend is talking too much 8:18

  • @MinecrafterPictures
    @MinecrafterPictures 1 year ago +1

    If I were to have like a private Emby media server with a bunch of these ssd cards I could fit thousands of full shows.
    And I mean all episodes of all those shows, from start to finish, and still have storage for all of my games. Sheesh that's a lot!

  • @infin1ty850
    @infin1ty850 1 year ago +27

    I bought a 1TB Sabrent for my Steam Deck, and it has been great. I'm curious about how the brand has exploded in what seems like less than a month.
    If the drive ends up maintaining its functionality for the advertised lifespan, I will be extremely happy. I just get the "too good to be true" vibe from their products.

    • @OdisHarkins
      @OdisHarkins 1 year ago

      Still running my Rocket 3 years later, they make a nice drive!

    • @infin1ty850
      @infin1ty850 1 year ago +1

      @@OdisHarkins When it comes to data storage, I've very rarely gone with any company that wasn't a "household name" in the US.
      I am still very impressed by the performance I've been getting out of the drive so far.

    • @xamindar
      @xamindar 1 year ago +1

      It's not like their drives are the cheapest out there or anything, so no reason they would be "too good to be true".

    • @bosstowndynamics5488
      @bosstowndynamics5488 1 year ago +1

      ​@@xamindar Plus they're pretty much just a Phison controller paired with name brand NAND. Hard to make a bad drive when all the hard parts are outsourced to experienced third parties.

  • @phizc
    @phizc 1 year ago +4

    It's faster than _a_ stick of 3200 MT/s RAM. Not 2 sticks as you would use in dual channel.
    As for SLC cache, the Rocket 4 8TB has around 880GB of it per drive, so yeah, if you have 4 drives it'll slow down after you've written ~3.5TB, but that's an awful lot of data to need to write at ~24GB/s.
    That thing is still insanely fast though.
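Checking that commenter's arithmetic (all figures come from the comment itself and are unverified):

```python
# SLC-cache headroom: how much you can write at full speed before the
# drives fall back to slower native TLC/QLC writes.
cache_gb_per_drive = 880   # claimed SLC cache per Rocket 4 8TB
drives = 4
write_gbps = 24            # claimed sustained array write speed, GB/s

total_cache_tb = cache_gb_per_drive * drives / 1000
seconds_to_fill = cache_gb_per_drive * drives / write_gbps

print(f"{total_cache_tb:.2f} TB of cache")        # 3.52 TB
print(f"{seconds_to_fill:.0f} s at full speed")   # ~147 s before slowdown
```

So even a worst-case sustained write gets nearly two and a half minutes at full tilt before the cache is exhausted, which supports the "awful lot of data" point.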

  • @jacob_90s
    @jacob_90s 1 year ago

    13:30 There are brackets you can buy that let you attach PC fans to blow on cards in PCIe slots. They would actually work really well for this particular setup

  • @angeldelvax7219
    @angeldelvax7219 1 year ago +4

    I would LOVE to have one of those in my dev computer. Imagine the gain! Developing 3D games / VR / AR applications can take up a LOT of storage space. And compiling / rendering needs to read / write as much data as the CPU / GPU can handle. Add project management / version control, and both capacity and speed of a card like this become VERY interesting indeed!
    Of course I can't afford even a basic dev system (not even 4.5K euro), let alone add this card XD

    • @3dchick
      @3dchick 1 year ago +1

      I know! It makes me drool. I caught myself considering selling my car 😂

  • @atruceforbruce5388
    @atruceforbruce5388 11 months ago

    Guess that beats what's currently on the market, even 6 months later. This product is still on pre-order.

  • @raakinguraisy2451
    @raakinguraisy2451 1 year ago

    Sabrent is absolutely insane for sending this madman these SSDs for free

  • @BD-8171
    @BD-8171 11 months ago +5

    Couldn't you use an integrated GPU and then use the leftover slot for 42 SSDs total?