8TB RAID 0 Guide: ASUS Hyper M.2 V2 - You will not believe the speeds on this NVMe storage adapter!

  • Published: 5 Sep 2024

Comments • 32

  • @mbe102 10 months ago +4

    Already have something similar in my Amazon cart for my newly arrived HP Z440! It's a Sabrent one, PCIe 4.0 (backwards compatible with 3.0). Pretty stoked to get it and give it a thorough try! Awesome timing on the video haha.
    The ASUS card seems really well built for power delivery!
    OH! Stoked for the Z440 vid! I plan on swapping to a Fractal Design Define R3 (after I get a new case for my current parts); I remember your post on Reddit about the front I/O. VERY EXCITING!!!!
    Are you going to use PrimoCache with your NVMe + IWP 16TB?

    • @racerrrz 10 months ago

      Great choice! The Z440 will be able to support the ASUS Hyper without any dramas. I am still amazed at how well this adapter maintains low NVMe temperatures. It performed only slightly worse than the HP Turbo Drive Quad Pro (the best thermal performer of all the quad PCIe adapters I have tested), and yet the Asus M.2 V2 is a fraction of the price!
      If you want an adapter that is a little more future-proof, go for the ASUS Hyper M.2 Gen 4 (PCIe 4.0); I trust it will perform just as well. I love how NVMe prices are slowly dropping. Good choice on the Sabrent NVMe - they are high-spec drives.

  • @shadowr2d2 8 months ago +2

    I love ❤️ your video 🎥 style. It’s like watching 👀 a sporting event 🥋🏂🥇. I have one ☝️ question: can you change the fan size? Maybe put on a larger fan, like a small Noctua NF-A4x10 FLX PC fan. Though I was also thinking 🤔 about the Lian Li LCD fan. Nothing better than a little RGB.

    • @racerrrz 8 months ago +1

      Thank you, cinematic viewing all the way.
      Yes and no for the fan change.
      The challenge would be the size of the fan, particularly given the low clearance. Looking at the NF-A4x10 FLX, I would say no - it is roughly double the depth that would fit. That doesn't mean it can't work: with some metalwork you could cut away enough of the casing for the fan to protrude through. But note the stock fan's concept is similar to laptop or console fans, in that it pushes air over the heatsink rather than pulling air through it like PC case fans do, so the lack of an outer fan shroud would matter as well.
      It would be a really inefficient fan as it is, but any flow is better than no flow.

    • @shadowr2d2 8 months ago +1

      @@racerrrz - Thank you for answering my question 🙋‍♂️. I have always wondered why they never made one with a bigger fan on it, and why no one has ever modded one with a bigger fan and RGB as well. But now I know: it pushes air over the heatsinks rather than pulling air through like in a PC. (Knowing is half the battle. - G.I. Joe)

  • @MM-vl8ic 10 months ago +1

    I might be missing something... but to test file transfer you need a known faster "source" drive. I use a 100+ GB RAM disk for my source files when testing, with a 50+ GB video file and 10 GB of tiny crap files thrown in.

    • @racerrrz 10 months ago

      Hi. I would agree - to properly test file transfer you would need to set up the right conditions for a fair test. This was a "lazy test" at best, but I think it's reflective of what you could expect from your actual file transfers. Most individuals do not have RAM disks set up on their systems, but that would make for an interesting test. My file transfer was from a non-RAID NVMe drive to the Asus adapter (with or without RAID 0). Speeds tend to be limited by the slowest hardware in the chain, so your mileage may vary on actual transfer speeds with this adapter.
      For your RAM disk, how are you utilizing the speed in day-to-day operations? I haven't read into them in detail, mostly because I do not currently see their end-use (I need long-term, highly stable data storage). Given that the NIC tends to be the speed limit, and that NVMes already deliver high speeds, I do not see the benefit of a RAM disk (ultra-fast but volatile memory) unless you have a few TB of RAM allocated to it - now that is something I would find useful for quick transfers. But even then, the return trip to conventional storage will be the speed restriction.
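      For anyone wanting to repeat this kind of "lazy test" themselves, here is a minimal Python sketch (the paths are hypothetical placeholders; use a file much larger than your free RAM, or the OS write cache will flatter the numbers):

      import os, time, shutil

      SRC = r"D:\bench\big_video.mkv"  # hypothetical: large file on the source NVMe
      DST = r"R:\big_video.mkv"        # hypothetical: destination on the RAID 0 volume

      size_mb = os.path.getsize(SRC) / 1e6
      start = time.perf_counter()
      shutil.copyfile(SRC, DST)        # one sequential copy, like a drag-and-drop
      elapsed = time.perf_counter() - start
      print(f"{size_mb:,.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:,.0f} MB/s")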

  • @yeoinaru 5 months ago +1

    Thank you for the video! This is exactly my scenario, as I have several old HP 640s and 840s that I can still squeeze performance out of. PS: wouldn't you get better performance if you did a hardware RAID instead of a software RAID in Windows?

    • @racerrrz 5 months ago

      I am glad you found it helpful. If you haven't already, check out my video on the HP Z Turbo Drive Quad Pro - it works really well in the Z440 / Z640 / Z840, if you can source one cheaply anyhow. I have one in my Z440 case swap for some TrueNAS caching.
      Hardware RAID would likely give higher speeds, but to keep the system a bit more lenient towards upgrades I stayed with Windows software RAID. I upgraded from the Z840 to the Z8 G4 and could simply slot the Asus Hyper V2 in without needing any changes. But that would make for a cool video - software vs hardware RAID. I did a test in Windows Storage Spaces and that seemed to work well enough.

  • @gornostai4ik_lol 9 months ago +1

    Speeding up old NVMe drives for 5,000 rubles - it's an option.
    A WD SN850X would be faster at Q32T1: roughly 950 MB/s, not 280 like in the video.
    The rational move is to buy new NVMe drives. You get both more drive endurance and more speed.

    • @racerrrz 8 months ago

      Nice, the WD SN850X seems to be a solid NVMe for those speeds. The Samsung 980s tend to be cheap for the performance on offer, but they are also limited to PCIe 3.0. Unfortunately my Z840 is limited to PCIe 3.0, so I can't make use of the extra speed of the SN850X on my system. But any modern motherboard would greatly benefit from the extra speed.

  • @monikaw1179 10 months ago +2

    You keep saying 'PCI Lane' when you mean slot. The x16 slot is split into 4 x 4 lanes using bifurcation. Otherwise, great video.

    • @racerrrz 10 months ago +3

      Hi. Thank you, I am glad you found it useful. I am sorry the lane reference got to you - I used it heaps, which would have been frustrating! Rechecking it, there were places where I said "x16 lane" where I should have said "x16 slot" - good spotting!
      I use "lane" deliberately, not through error, because referring to an x16 mechanical slot isn't sufficient for these adapters to operate, and it can actually be misleading.
      Let me explain:
      e.g. The HP Z240 has an x16 PCIe slot (Slot 4) that only has x4 electrical connectivity. Despite the deceptive x16 appearance, that slot cannot run this adapter with more than one NVMe, even though its x16 size matches the x16 connector on the ASUS Hyper M.2 V2. Hence I refer to "x16 lanes" to stress that the PCIe slot you select must have x16 electrical lane connectivity for this adapter to function as intended, along with the right BIOS PCIe bifurcation setting (x4/x4/x4/x4 or x8/x8 lane bifurcation).
      The discrepancy between mechanical slot size and electrical lane connectivity isn't that common, and manufacturers like HP usually flag the difference by including the lane allocation in brackets, e.g. PCIe3.0 x16 (4): PCIe 3.0 signaling, x16 mechanical size, and 4 electrical lanes connected.
      I outlined all this in more detail in a previous video in case anyone finds it useful: ruclips.net/video/PRMWIpscDCg/видео.html
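      Side note for Linux users: you can verify what a slot is actually wired for from software; a minimal Python sketch reading the kernel's sysfs link attributes (the PCI address is a hypothetical placeholder - find your device's address with lspci):

      from pathlib import Path

      # Hypothetical PCI address of a device sitting in the slot under test.
      dev = Path("/sys/bus/pci/devices/0000:04:00.0")

      for attr in ("max_link_speed", "max_link_width",
                   "current_link_speed", "current_link_width"):
          # An x16 card in an x16 slot wired x4 negotiates a link width of 4.
          print(attr, "=", (dev / attr).read_text().strip())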

  • @zechsoner 6 months ago +1

    Good content fr

    • @racerrrz 6 months ago +1

      Thanks, I am glad you found it useful. The Asus Hyper is still giving me good service several months later. Very satisfied with its performance.

  • @FamilyTuned 5 months ago +1

    Do you think this would work with Crucial T705 SSDs on a Gigabyte MZ33-AR0 board and an AMD EPYC 9374F?

    • @racerrrz 5 months ago

      That's a solid selection of hardware. From what I can trace, the 4th-gen EPYC processors do appear to support bifurcation (I would be surprised if they didn't) - and that's really the only thing that could prevent the Asus Hyper M.2 adapter from working as intended. (Reference: www.amd.com/system/files/documents/4th-gen-epyc-processor-architecture-white-paper.pdf )
      Given that the Crucial T705s are Gen 5, I would expect their performance to be bottlenecked by the PCIe interface they are paired with. The Asus Hyper M.2 (slightly longer than the one in this video) is Gen 4; the Asus Hyper M.2 V2 (in this video) is Gen 3. Gen 3 is limited to ~3500 MB/s per drive (8 GT/s per lane), Gen 4 to ~7000 MB/s (16 GT/s), and Gen 5 to ~14000 MB/s (32 GT/s). These speeds will vary depending on the hardware in a given system.
      Placing 14000 MB/s NVMes into the Gen 3 Asus Hyper M.2 V2 should net RAID 0 speeds of ~14000-16000 MB/s, but if you get the Asus Hyper M.2 (Gen 4) you could theoretically double that to 28000-32000 MB/s. The ideal solution would be a Gen 5 PCIe adapter like: www.gigabyte.com/SSD/AORUS-Gen5-AIC-Adaptor#kf - I can't seem to find any videos of a Gen 5 NVMe RAID 0 setup, but I am sure one exists out there somewhere.
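      For anyone curious where those ceilings come from, the arithmetic is simple; a quick Python sketch (theoretical link bandwidth only: 128b/130b encoding, protocol overhead ignored, so real drives land a little below these numbers):

      # Gen 3/4/5 signal at 8/16/32 GT/s per lane with 128b/130b encoding.
      GT_S = {3: 8, 4: 16, 5: 32}

      def lane_mb_s(gen: int) -> float:
          return GT_S[gen] * (128 / 130) / 8 * 1000  # GT/s -> MB/s per lane

      for gen in (3, 4, 5):
          drive = 4 * lane_mb_s(gen)  # each M.2 drive gets x4 lanes
          print(f"Gen {gen}: ~{drive:,.0f} MB/s per drive, "
                f"~{4 * drive:,.0f} MB/s for four in RAID 0")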

    • @FamilyTuned 5 months ago +1

      @@racerrrz thanks for the rundown and tip! Much appreciated

    • @FamilyTuned 5 months ago +1

      @@racerrrz I’m guessing the Gigabyte would work better with the MB, just based on being the same manufacturer

    • @racerrrz 5 months ago

      @@FamilyTuned I would believe that also. The Gen 5 adapter doesn't seem to be that abundant (I am not 100% sure when it was released, but likely 2022), but I traced one on Amazon: amzn.to/3vJn5jj (affiliate link) or Amazon search: "Aorus gen 5 adapter AIC".
      If you get one it would be cool to hear what speeds you end up with. Their prices are decent, actually - the older Aorus Gen 4 AIC card sells for more! I have yet to see any videos on the Gen 5 adapter, but I would expect fast speeds on a modern system.

    • @racerrrz 5 months ago

      @@FamilyTuned No trouble. Your setup would make for a really potent system - what are your end-use plans for it?

  • @user-vh3vt9mn4i 4 months ago +1

    Can I have RAID 0 on it?

    • @racerrrz 4 months ago

      Hi, yes you can, and it really helps to boost the speeds of these NVMes. I provided a full guide on how to set up RAID 0 in the video and included the read/write speeds: 15:47.
      I also did another video that compared 4 different quad adapters with 4x NVMes in RAID 0 here: ruclips.net/video/xqg0uQ93KTg/видео.html

  • @shephusted2714 9 months ago +1

    pita to watch - i am on tiktok

    • @racerrrz 9 months ago

      It's never easy to strike the perfect viewing experience for everyone - and it sounds like your experience was less than ideal. If you are willing to share feedback on what you disliked I will take that into consideration for future videos.

  • @TheRealD00berZ 10 months ago +1

    Weeee

    • @racerrrz 10 months ago

      Indeed. I am editing the HP Z Turbo Drive video footage as we speak! Hopefully I can fast-track it, since it was filmed in May... lol

  • @thisiswaytoocomplicated 8 months ago

    That's just completely dated rubbish. Is this video from >5 years ago? (16TB IronWolf - yes, that must be at least 5 years old.)
    Also, the cooling design is years behind. More modern cards have dedicated coolers on the NVMes and a fan then cooling those coolers.
    Get at least a PCIe 4.0 x16 card so you can run non-ancient NVMes at speed.
    4x 7GB/s in parallel is really nothing fancy anymore - if you have enough lanes. And that HP Z840 is really an antique. Just guessing: 2015 or somewhere around that time? Any cheap current AMD will run circles around it.
    But this is really only yesterday's tech.
    And RAID 0 is of course never a good idea and never was - completely independent of how old it is.
    But nice antique show.
    😉

    • @racerrrz 8 months ago

      Hi there. Thank you for the input. Was it that obvious that I planned to take you back in time? Back to a time when RAID 0 on PCIe 3.0 x16 was still cool and "large HDDs" were unaffordable; that sounds like 2017 to me haha.
      The Asus Hyper V2 is a current-spec NVMe adapter (designed specifically for PCIe 3.0 systems - like the dated HP Z840 workstation), as are the 1TB Samsung 980 NVMes (PCIe 3.0 and not that new). The IronWolf Pro 16TB was first released in 2019, but I can't see why anyone would buy a high-capacity HDD at launch - the cost per TB would be uneconomic.
      It sounds like you prefer more modern desktop hardware, say a Ryzen 7 CPU on an X570 with glorious PCIe 5.0 and some Sabrent Rockets. Ironically, those systems also benefit from RAID 0 on the Asus Hyper M.2 quad adapter, and the 16TB IronWolf Pro would be a fantastic storage upgrade for them. So I fail to see how this tutorial doesn't benefit people with more modern systems. All you need to do is spec the hardware you buy to suit your system.
      I'll add that the Z840's best CPU combo (dual E5-2699A V4s; 44C/88T) sits around position 250 in the PassMark CPU benchmark list for multicore performance (not far from an AMD Ryzen 9 5950X) - with AMD's current-gen CPUs at the top of the list obviously, and single-core performance much more in favor of desktop CPUs: www.cpubenchmark.net/CPU_mega_page.html#
      Not bad for 2014 hardware lol. If you hunt around you could have either an AMD Ryzen 9 5950X or two Xeon E5-2699 V4s for the same money; the snag is that the hardware to run the Ryzen costs much more than the old Z840.
      You'll find the video below more in line with modern-day hardware - but I will add that I obtained nearly the same RAID 0 speeds on my aged PCIe 3.0 Z840 as the modern hardware tested here by another channel (Asus Hyper M.2 with Samsung 990 Pros):
      ruclips.net/video/c-vMzeQgd5k/видео.html
      While on the topic of new and modern, I quite enjoyed Linus's video on the 21x Sabrent Rockets fitted to an Apex Storage X21: ruclips.net/video/IFwFDZDQWAA/видео.html
      And you may enjoy the Z840's younger brother:
      ruclips.net/video/ESW3gXUo1QM/видео.html
      What would we do without Linus covering the hardware we can't afford lol.
      On the plus side, I have upgraded my Z840 to an HP Z8 G4, and this same adapter netted up to 12000 MB/s read and 10000 MB/s write speeds, though on average it's lower than that with the drives full.

    • @thisiswaytoocomplicated 8 months ago +1

      @@racerrrz Thanks for your reply! I failed to recognize this as a sort of back-in-time video, since it just came up in my feed and I was unaware of any context.
      I'm not one of those people who always needs the newest hardware. In fact, my last machine, which I only replaced this spring, was an old Dell Precision with a mid-level Xeon from 2015.
      That, BTW, had a 4x NVMe PCIe 3.0 OEM adapter of likely better quality than the adapter in the video.
      I tend to use my machines for as long as possible and only replace them when they really become inadequate for the job at hand. Just a question of sustainability.
      But one thing I really learned - using RAID since about the very early 90s in each and every one of my machines: never use RAID 0. Or at least only use it if you have really top-notch backups.
      I regard it as a sort of learning process. People usually start with RAID 0 because the gain in speed is so impressive. After they get hurt, they usually switch to RAID 5.
      And then comes the point, some years in, where people learn that for most cases they really want RAID 6. ;-)
      As said, I only tend to upgrade when it is really necessary, and then I tend to size it to last for at least another 5+ years according to my needs.
      So right now my current setup is kind of massive overkill: Ryzen Threadripper Pro 5975WX, 32 cores (currently ranked 28th on PassMark), 512 GB of ECC RAM, 8x 2TB Samsung 990 Pro, 2x Exos X18. Those 8 NVMes are mirrored in pairs and running ZFS - sort of RAID 10. After all those years with computers and RAID, I came around to preferring reliability a long time ago.
      As RAID 0, those NVMes could easily deliver >50GB/s. But that would really make no sense, and it's something I leave to the current generation of script kiddies. I grew out of that a long time ago. ;-)
      The last time I used RAID 0 was with 5 IBM DCAS 4GB SCSI drives around the mid-90s.
      It was nice at the time and made some sense - but today I see absolutely no reason to take the risk. Drives are now really big enough that you don't need to compromise anymore for most cases.
      My little home server, made mostly from recycled parts, is currently running 8 Exos X18 for mass storage, for example. RAID-Z2 (similar to RAID 6), of course. Plenty of capacity, and sequential read/write performance is around 1GB/s.
      Which is really good enough for me with rotating rust. 10GbE Ethernet is already a good choice for something like that.
      But really: don't use RAID 0 - that is a typical rookie choice. On the upside, you can only grow and learn from that choice ;-)

    • @racerrrz 8 months ago +1

      @@thisiswaytoocomplicated Arguably all my videos are trips back in time, because I mostly work with hardware that suits my current well-aged machine. But in some way all of it is still trending right now, because the current generation of tech still seeks performance, even if it means using old methods to get it (like RAID 0).
      The Asus Hyper, ironically, was the best overall performing adapter in my testing: ruclips.net/video/xqg0uQ93KTg/видео.html
      I did RAID 0 testing with Gen 4 NVMes on 4 different PCIe adapters: two Gen 4 quad-NVMe cards and two Gen 3 quad-NVMe cards (Asus Hyper M.2 V2 vs AORUS Gen 4 AIC, HP Z Turbo Drive Quad Pro and a JEYI quad U.2-to-M.2 adapter).
      My quest for RAID 0 is not about reliability - just speed. I use the drive as a temporary storage archive for video editing. My current RAID 0 pool is 8TB (4x 2TB in RAID 0 in the Asus Hyper), which lets me essentially clone what is on a 10TB HDD (same directory structure) but without the HDD's speed limit. When you handle multiple video files in different directories it becomes basically impossible to edit from the HDD. So RAID 0 is purely a fast scratch drive with temporary storage, and it should be treated like it will vanish (one drive fails, they all fail - living dangerously!).
      My plan is to create a RAID 5 pool of ~5-6 10TB drives initially, using TrueNAS Core (it's in my case-swapped HP Z440 - there's a video for that also haha). So the in-use data lives on the RAID 0, which is backed up to a 10TB rust spinner, which is backed up to a RAID 5 HDD pool (with a 1TB NVMe for SLOG) over a small 10GbE network (HP Z8 G4 to UniFi Flex XG switch to HP Z440). Eventually I'll work towards hardware RAID for added performance.
      Your system is quite powerful. ZFS with 8x Exos X18 drives in RAID-Z2 is a solid-performing pool, and reaching 1GB/s in the modern day is the ideal. Combine that with 10GbE networking and you have a potent system with no major bottlenecks.
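      As a back-of-the-envelope check on that ~1GB/s figure, a quick sketch (the per-disk streaming rate is an assumption for an Exos-class 7200 rpm drive; RAID-Z2 streams from the data disks only, and real pools land below the theoretical ceiling):

      drives, parity, per_disk_mb_s, size_tb = 8, 2, 260, 18  # assumed per-disk rate

      data_disks = drives - parity  # RAID-Z2 spends two disks' worth on parity
      print(f"usable capacity ~ {data_disks * size_tb} TB (before ZFS overhead)")
      print(f"streaming ceiling ~ {data_disks * per_disk_mb_s / 1000:.1f} GB/s")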
      A Ryzen Pro 5975WX is at the upper end of the spectrum, solid performer! How you managed to afford 512GB of RAM (I presume DDR4 3200MHz) I have no idea! haha. I 'cheaped' out with 256GB of DDR4 2933 MHz ECC server Regulated RAM and that cost nearly the same as the whole workstation... 8x 990 Pros in ZFS's 'RAID 10' for your main machine is impressive - how have you connected those NVMes to your system? The most optimal method I am aware of would be a 5.25" Bay adapter (the Icy Dock MB873MP-B would be a dream for that), and I presume hardware RAID?