@Level1Techs Hi Wendell, can you point me to the forum threads? I wrote the internal Intel performance manuals and developer automation. Perhaps I can provide recommendations, and when I have some spare time I can write up some simple AI/ML automation scripts. Personally, I use the Designare X299 10G with 10 NVMe SSDs + VROC + TPM 2.0 + 10980XE + 256 GB 3600 MHz DRAM, which requires special firmware from Gigabyte Engineering, in Ubuntu 20.04 LTS x64 and Windows 10 x64. P.S. If you look up my patents, we have something much better coming to a theater near you 😉
forum.level1techs.com/t/critiquing-really-shitty-amd-x570-also-b550-sata-ssd-raid1-10-performance-sequential-write-speed-merely-a-fraction-of-what-it-could-be/172541/27 Nice to meet you, sure docs and whatever you need to use the awesome including the bios is good. I get the impression some at Intel didn't think there were enough enthusiasts to bother documenting the awesome
I've been trying for days to set up my ASUS B450M (Prime-A II) with a 1TB NVMe drive, a 500GB SATA SSD for the OS(es), and 3 HDDs in RAID 0. I liked the idea of motherboard RAID because I don't trust Win10 not to be awful in reliability and function. Problem is, when I enable RAID mode the NVMe drive doesn't show up in the BIOS, and it only shows up in Windows Setup if I install the SATA RAID drivers during setup (this would make both SSDs and the RAID array show up, until I tried to set up a Storage Space in Windows, which would make the RAID array disappear when making the storage pool). It seems the ONLY way to actually use all the drives is to run in AHCI and use Windows Storage Spaces. While I did spend a ton of hours persisting when I probably shouldn't have, it wasn't time wasted. I learned a lot about storage, how the BIOS and Windows drivers behave, and got familiar with installing drivers at OS setup.
Literally my only use case for RAID is that I only see a single C: drive in Windows. There's no way other than motherboard NVMe RAID to combine two 2TB drives into a *bootable* combined 4TB volume; I'd even take lower performance in RAID than single drives just so I see a single drive.
Is it possible for GRUB or another boot loader to boot Windows/Linux from media the BIOS has no boot support for, such as software RAID or, in my case, PCIe NVMe? Assuming GRUB itself is sitting on media the BIOS can boot from without problems.
Hi. On Linux you only need one boot partition (/boot) as non-RAID to store your boot files (kernel, initramfs and a bootloader such as GRUB). Then all the rest of your system (root drive /, /home, etc.) can be stored in different RAID volumes. For example, I tend to use this scheme on servers: one /boot partition (non-RAID) + one alternative /boot2 partition on a second drive (non-RAID but synced periodically using rsync), a RAID 1 volume (two partitions, one per drive) for the root filesystem /, and one RAID 5 array (at least 3 partitions on 3 drives) for your data in /home. There are several tutorials on how to do that for the different Linux distributions. It is usually supported by all "server" distros but it can be a bit complicated for a regular desktop install...
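For anyone who wants to try that layout, here is a rough mdadm sketch of the RAID parts. This is only an illustration under assumptions: device names, partition numbers and the mdadm.conf path are hypothetical and vary by distro.

```
# Sketch only: /dev/sdX names and partition numbers are hypothetical.
# /boot (e.g. /dev/sda1) and /boot2 (/dev/sdb1) stay plain partitions, synced with rsync.

# RAID 1 across two partitions for the root filesystem
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# RAID 5 across three partitions on three drives for /home
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3

mkfs.ext4 /dev/md0    # root
mkfs.ext4 /dev/md1    # /home

# Record the arrays so the initramfs can assemble them at boot
# (config lives at /etc/mdadm/mdadm.conf on Debian/Ubuntu, /etc/mdadm.conf elsewhere)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```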
Wish I'd seen this video a few months ago. Built my first PC in 15 years back in January: 5900X on the X570 platform. Last time I built a PC, RAID was the standard for storage, so if 1x Gen4 NVMe can hit 5K MBps reads, 2x in RAID0 would be even better, right? Wrong. After getting everything set up, I had 2x Corsair Gen4 NVMe drives in a RAID0 boot drive with all benchmarks showing a ton of performance left on the table. Nowhere near 5K MBps read. Was finally able to narrow the cause down to AMD's RAID driver. Such a headache to switch back to AHCI-managed drives.
I did the same and saw improvement only in large block transfers (2x as expected in pure sequential reads), but everything else was slower with NVMe RAID via RAIDXpert2 and the on-board config. Anyway, I can't see why anyone would boot off a RAID, especially with how fast M.2 sticks are these days. You're just asking for trouble.
I haven't had trouble using RAID on my motherboard with AMD RAID. I got two 2 TB 7200 RPM HDDs in RAID 0 for my Steam library on my X570 Aorus Master mobo and it's amazing: I basically have a very cheap 4 TB SATA SSD now, with read/write speeds averaging around 427 MB/s!
That made me think of the time I built a Windows server with motherboard RAID, and Windows refused to enable disk cache because it didn't see a battery backup. It was the slowest new install I've ever done and there was no fix at the time.
So on my EPYC system with Unraid, is AMD suppressing SATA errors? Most of my drives are on PCIe RAID/IT-mode controllers. I tend to RAID large spinning-rust drives, and I tend to use real RAID controllers, but I get the feeling a Windows RAID would be fine for 4x HDDs.
Kinda think that in most 'ordinary' cases the SSD has killed some of the reasons for RAID, and provides good speed out of the box. The protection, while valid, is equally well done by backup, which you still have to do if you choose RAID. Note: I'm saying the above for ordinary use. For servers, or special soup, RAID still has magic sauce you might chase down, but anyways...
Was it a boot drive? Caching a boot drive, from my experience, is even riskier than RAID; so many issues with Optane as well. Ended up reverting many people to just straight NVMe boot drives.
Would enjoy a review of all the X570 mobos out there, specifically the ASUS ROG STRIX X570-E Gaming. Watched one of your old vids where you mentioned Sun Microsystems SPARC servers; yup, that took me back a couple of decades....
I currently have my OS (Win10) installed on a RAID0 array with 2x 500GB SSDs, using the Gigabyte motherboard's onboard RAID. Thinking about adding another 2x 500GB SSDs and switching to RAID10. How much will performance change? In case it matters, this is a 12-year-old custom build with an Intel 3770K, and it's a bulletproof workhorse. Never failed once in 12 years.
Yeah I only use x570's hardware raid just to create a small JBOD to mirror my NAS so I can upload it to B1, and have a second physical copy of my data. Not quite the 3-2-1 I want, but I'm getting there. Otherwise, since I don't have a real need for RAID, I just don't use it at all. Simple as that. Hell, even in TrueNAS, I just use mirrored VDEVs lol.
The original paper from 1988 that coined the name was "A Case for Redundant Arrays of Inexpensive Disks (RAID)", and it should not be forgotten, even though there are some hardware vendors that would very much like to put the "Inexpensive" part to rest. You know who you are, EMC and NetApp.
@@wolf2965 Yeah, it was early on, but AFAIK an advisory board in the pro-RAID, pro-SAN standardisation council/body/conglomerate decided to switch that "supposedly pejorative or diminishing or unrealistic" term to 'Independent' back in the 90s...
@@werewolfmoney6602 Mirroring may be used as often, especially in enterprise settings, though for the added security instead of performance... but neither it nor even a JBOD configuration could be treated as truly independent if data and metadata can be located on different devices, regardless of file system or hardware choices. The independent factor rather stems from using separate/independent devices to form a storage pool instead of using bigger and potentially more performant/powerful devices...
What will help: DMI 4.0, released on November 4 2021 with 600 series chipsets, has 8 lanes each providing 16 GT/s, two times faster compared to DMI 3.0 x8
Revisit on X670E? AMD also says there are different drivers for different CPUs. Did and cc? It's been a year, they have to have fixed the bugs by now, and it's a new platform.
What's funny is that I experienced the opposite with my Asus B570 board. I got poor results with the Windows Disk Management RAID 0, and around a 10-15% boost in speeds with the AMD RAID software. I am however not using SSDs, but 5x WD 4TB HDDs. Something to consider as well is the heat on those SSDs, and how they slow down after being run hard for long periods of time, which will also lead to weird intermittent R/W speeds. I also noticed you're using cheap SSDs, which are notorious for getting hot and slowing down.
Even a decent Samsung NVMe, if run for long periods of, say, plot creation for HDD coin farming, will start to flip-flop. Latency will skyrocket above 1 million, and IOPS sink randomly.
Thanks for the suggestion. I have an ASUS B550-F Gaming MB and wish to set up two 4TB WD drives in RAID 1. I will test the setup in the BIOS to see if that delivers the expected results of 1x write & 2x read speed. 😁
@@stevetech1949 I need to correct myself, I have the Asus B550-F Gaming MB as well, not the 570. I am up to 5x 4TB HDDs that I have put into a RAID 0 array. With the 550 board, don't forget that if you use the second NVMe slot, it turns off your last two SATA ports on the board. Ohh, and with the 5 drives in RAID 0, I'm currently hitting around 600-700 MB/s transfer rates on the array.
Onboard RAID is one of those things I abandoned once SSDs really took off. It used to be the only way to make your computer actually faster because HDDs were so god damn slow.
I decided to do a RAID0 array with two M.2 Gen 3 drives on an X570 Aorus Master. Then I decided I wanted to try a VFIO setup with a 5900X, 6800 XT Merc, and a 1070 Ti. Unfortunately PCIe lanes are running thin, so I might wait for the new Threadripper. In conclusion, I will always find a reason to build a bigger and faster system, and your videos always help hahaha thank you!! Edit: The RAID0 is a non-bootable volume
I wonder if they have fixed these issues yet? I am considering doing SATA RAID0 over an NVMe RAID0 on X570, or maybe even waiting for X690/X790 if they make that..... yes, I put 90 for a reason, I think TR40 should go away and bring back max PCIe to desktop. :)
I just want to have one big media drive for steam games. Wish we could go back to the hdd days where you had much more storage than you could fill but with nvme speeds. With modern games at 250gb even a 4tb ssd won't get you far.
Any thoughts on Windows Storage Spaces? I've used Unraid and TrueNAS but on Windows I can use Backblaze's unlimited off-site backup for 7 dollars per month. The Windows setup I have now is technically worse in just about every other way. If backblaze supported Linux on their personal backup plan I'd switch back in a heartbeat. Is there any Linux supported offsite backup solution that'll keep my 11ish TB safe for 7 bucks a month?
@@Level1Techs I'll give it a go on a subset of my data. There's something to be said for Backblaze's flat-fee unlimited capacity backup - it means it doesn't take up mindspace. I'm not sure how much the deduplication and compression on tarsnap would actually cost per month. It's 0.25c per GB, which would be very expensive if it worked the same way as typical backup solutions. I'd need to test with some of my data.
I used PrimoCache for a few years, but I burned through my Intel 750's write lifetime. These days I'd rather avoid all types of SSD cache because they unduly shift read-heavy workloads into write-heavy ones. All SSD all the way.
I must admit I have used RAID 0 on my last three systems and had no noticeable issues and definitely a noticeable improvement. This latest build however used 3x NVMe disks for the array and it was a waste; it basically only gives the speed of two drives. I'd still recommend dual-drive arrays to people just looking for performance.
RAID works by having multiple legends and heroes living in the shadows.
Hahaha
RAID SHADOW LEGENDS!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
GET OFF OF MY HEAD!!1!1!!1
Boooooo
The one time I would have accepted a raid shadow legends sponsor spot
I am taking classes for my IT A+ certification tests, and we just started talking about RAID this week!!! Thanks for the info!!!
Good luck on your exam!
Never since a security bios patch update ate my entire array.
A BIOS update for my Sabertooth X99 would cause drives to just drop, saying they were bad even though they weren't. I caught this before I lost too many. Not all were as lucky, sadly.
"Hard won experience" - This... I felt this when you said it. So many nights working through the AORUS RAID tools both SATA and Nvme. So many more nights making server 2019 Core work with an AORUS board to begin with.
Oh man, even better, we were doing this at roughly the same time it seems... ~4 weeks ago I built 3 AMD RAID systems to move Chia plots long term and also had one with weird performance that turned out to be ONE BAD CABLE. New cables from a box of 6, and 1 just wasn't up to the job.
Aorus boards are junk. I will not buy one again even if they have great caps
@@timothygibney159 To each their own. Gigabyte has always gone above and beyond for me with the couple of RMAs I've had in the last 15 years, and the only complaints I have are about RAID, which is more on AMD, and about Gigabyte's choice of certain Intel LAN and WLAN modules. Some aren't compatible with certain OSes due to driver silliness by Intel.
Of all the board vendors, Gigabyte has earned the most credibility with me, and 4 of the 5 systems in my home run on their boards. The 1 other is an ASRock build.
@@daemonfox69 I have to keep taking the CMOS battery out once a week to keep it booting. My 2nd NVMe drive keeps disappearing, and this is the 2nd Gigabyte board with the same problem. They won't RMA their boards either.
I've also had good experiences with Gigabyte, not enough to say it's my go-to, but if it's an option I'll definitely consider it.
But I wouldn't use an Aorus as a server, tho.
This man goes through so much pain so you don't have to. Only total respect.
The only raid I would EVER do is raiding the pantry for orange soda and snacks.
NVMe for the win, beats RAID tenfold for speed at least
@@raven4k998 but what if you raid 0 two nvme drives
@@MrBearyMcBearface I was kidding, child. RAID is just for no reason at this point, cause it no longer does anything other than use the word RAID and compromise your data, cause people no longer care aboot RAID to save data integrity
@@MrBearyMcBearface then you raided two nvme drives for what reason????
there was a time when hardware RAID was the real "pro" deal... and software RAID was bad... have the tables turned?
Some times it flips. Then it flops. The times they are a-changin' ...
Growing up I got into so many re-install scenarios with my PC builds due to my own ignorance about RAID. One particular build in the early 2000s I was doing a RAID 0 setup with two WD Raptor 74GB drives using a PCI (not PCIe) RAID card. I reinstalled Windows and games so many times troubleshooting corrupted drives that to this day I remember the majority of CD keys for my big games from that era.
Since those days I've generally stayed away from using RAID, although recently I started messing with ZFS and Windows Storage Pool stuff. Thanks for yet another fun video, L1T!
Thank you for bringing this up. I spent a lot of time struggling with RAID on my x370 gigabyte board and ultimately I just had to ditch the idea and bought a larger SSD with a huge gaming/backup HDD.
Thank you for this video! Your wording is straight and to the point without being too meandering. Even when you have little asides you're keeping each one to the point. You rock!
"Intermittent" the worst thing to hear when talking tech.
I'm sorry but the phrase "avoid it like the plague" has been cancelled.
Recent evidence suggests the average person does not, in fact, make any attempt to avoid the plague.
Stay hiding away if you're afraid of germs
@@timramich I hope someone you love is taken from you
"Avoid it like responsibilities" is the new phrase.
@@thelegalsystem I've heard of some people knowing a friend/family member who died from "the thing" and they still don't care or say it isn't real, apparently even death isn't good enough to take something seriously these days.
@@thelegalsystem Thank you. You must be ultra left. You can go around threatening a president you don't like, but the minute a person says there are only two genders, they should be locked up for hate speech.
Can you do a comparison between file systems that support raid (ZFS/Btrfs) and also solutions like intel rapid storage?
Wendell: "Theres something wrong with the reads"
Me: LITERACY!!
15:35 Makes sense. I once had a mobo SSD RAID0 (using two earlier SanDisks) on an MSI 990FX board for boot and OS (Win10). Performance was good when the RAID was new, but over time the writes slowed to just 150MB/s, down from over 600MB/s. The reads were not affected much. I even thought the disks were getting 'worn out', but I removed them from the RAID and formatted them, and it turns out the disks were pretty much fine, same as before. Maybe TRIM issues were the prob.
So do you recommend Windows software RAID? One big issue with Windows software RAID is that whenever the PC is shut down uncleanly (like a crash), Windows wants to do a full sync again. And when using something like a 12 TB HDD, such a full sync takes 50 hours or so. And it restarts whenever you restart your PC. Now, when your PC is never running for 50 hours straight, that full sync can never actually finish, and you hear the HDDs working the whole time while using the PC. It's not great. I haven't found a solution or better way for that yet.
You probably shouldn't be using RAID on any system that doesn't have expected uptimes to rebuild that volume.
Well, I agree, RAID on motherboards hasn't been overly hot. But doesn't using RAID on SSDs pose problems with TRIM and also increase (exponentially) write amplification?
Depends on the kind of RAID and the parity type you decide to go with. RAID 0, 1 and 10 would probably pose little to no issue for SSDs; it would be the levels that keep parity across the devices for rebuilding, and when it comes to rebuilding the RAID it could potentially cause other issues. (Just using what I was told in school)
Edit: I wouldn't know much about trimming though
I imagine non-NAND flash drives would fare better.
RAID5 has natural write amplification because changing 1 byte requires reading blocks from another disk, recalculating parity, and writing that too; a small write becomes roughly four I/Os (read old data, read old parity, write new data, write new parity).
RAID10 and increasing the disk budget was always a better option, as the simplicity saved more than the doubled disk count cost.
TRIM and optimization work with RAID as long as the array isn't dynamic
Great intro, Wendell.
Yes, we do come for the rambling. :3
I'm a big fan of the linux raid10 with the F2 layout, even with just 2 drives. Read performance is identical to raid 0, write identical to raid 1. Not sure if it still matters with fast SSDs (compared to the near-2 layout), but I don't really see any downsides.
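For reference, a minimal mdadm sketch of that far-2 layout on two drives; the device and array names are hypothetical.

```
# Two-drive md raid10 with the "far 2" layout; the near-2 layout would be --layout=n2
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sdb /dev/sdc

# Confirm the layout and watch the initial sync
mdadm --detail /dev/md0
cat /proc/mdstat
```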
I run a RAID 5 with a RAID 10 cache in front, it's excellent.
@soyel94 RAID 10 requires 4 disks, not 3. RAID 5: just don't do it, drives fail on rebuild. RAID 10 is superior to using parity when it comes time to rebuild.
I used Intel motherboard RAID for 6 years with no issues and it saved me from a hard drive death. I used AMD motherboard RAID for less than 4 months and it nuked Windows twice and cost me thousands of dollars in lost work. AMD fans are broken in the brain.
I've been building and fixing computers since 1999, and I've probably spent 3 years of my computing life dealing with raid0. I love it!
i love how youtube's compression had so much trouble dealing with your shirt
I wanted to set up a non-booting RAID with a B550 Aorus Master board, dug up and installed 2 unused spinny 1TB drives, etc., but stopped when I learned that even a BIOS update can damage the array. RAID plan dropped.
Please advise on what could be a good (and safe) solution for a machine that dual boots between Windows 10 and Ubuntu 21.04. I want redundancy and speed so that I can use it as a data location alongside my main M.2 980 Pro 1TB drive. I have an old PCIe SATA expansion card; maybe that will free me from the BIOS-update array-loss threat.
Thanks for your time and effort.
Absolutely no one
Level 1 Techs: Let's talk about RAID, which has gone almost obsolete in the PC world
Why is wanting to have uninterrupted operation in the event of a (boot) drive failure something that should ever become obsolete?
My workstation runs 4x 905P drives mounted on a Hyper M.2 x16 card in VROC RAID 0. The performance was pretty untouchable when considering both sequential and 4KQ1T1 until the P5800X came out. After some tuning and OCing I get 200-220 MB/s 4KQ1T1 read, which is nuts for a drive that also has insane sequential read.
905s are lovely second hand, have a bunch of them and not a one is below 90% worn.
A place of love.
I feel you my Shalomie.
The very famous FIO test is what's running in the background of CrystalDiskMark. If you try to benchmark the device using FIO directly, you should get the same benchmarking experience and the actual read and write speeds.
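If anyone wants to try that, here is a rough fio sketch of CrystalDiskMark-style sequential (1M, QD8) and random (4K, QD1) read tests. The test file name/size and the libaio engine are assumptions; on Windows you would use a different ioengine, and a write test against a real device is destructive.

```
# Sequential read, 1 MiB blocks, queue depth 8
fio --name=seq-read --filename=fio-test.bin --size=4G --rw=read --bs=1M \
    --iodepth=8 --numjobs=1 --ioengine=libaio --direct=1 --runtime=30 --time_based

# Random read, 4 KiB blocks, queue depth 1
fio --name=rand-read --filename=fio-test.bin --size=4G --rw=randread --bs=4k \
    --iodepth=1 --numjobs=1 --ioengine=libaio --direct=1 --runtime=30 --time_based
```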
Since trying RAID 0 & 1 many, many years ago and having issues (on HDDs), I never went back to using it for my home / gaming PCs. I'm not into IT and tech support etc., where some of these RAID options for businesses have their perks. Great video though. Appreciate the work you do on this channel and love your content 🥰🥰
Hi, professional 30-second commenter here.
I quit using mobo RAID for Windows pooled storage. Performance isn't much better, if at all, but if the mobo dies, or you want to swap from one computer to another Windows comp, it just does it. Saved me when I went from Intel to AMD.
Next up: buying a PCIe RAID controller for the speed.
Excellent video as always. One minor update: NVMe is not limited solely to SSDs. As of NVMe 2.0 (released before this video was published), NVMe can be used to access HDDs. HOWEVER...
... this does nothing to negate what is said here. PCIe HDDs are not typically marketed or offered to consumer systems. The TP (Technical Proposal, TP4088 for those who care; integrated into NVMe v2.0) was designed to allow hyperscalers to use the same NVMe driver for both SSDs and HDDs, which simplifies management and upgrades.
I'd like to see a PCIe lane suffer from the boredom of transferring the ~200 MB/s a spinny disk can output.
Great vid as always. Also loving the background music. Sounds like bopping through cyberspace. Anyone know what it is?
Hello! This song is called Vital Whales by Unicorn Heads. I found it through the YouTube audio library. ~ Editor Autumn
Ya, a ramble of a video indeed. I know the topic was about motherboard RAID (specifically AMD firmware), but all focus was lost after you returned from your sponsor message.
...so anyways, I have three 1TB WD spinning hard drives I'll raid together in RAID 0 as my backup and Steam game library volume. It won't be my OS volume; that'll go on a single NVMe drive (with no RAID to speak of.)
This is super informative and delivered in an entertaining way as always, Wendell!
Nice to have a clear answer. Thanks
I have an ASRock X570 Creator running in a film-scanning host machine with 4x 870 QVO 4TB drives running in RAID 0. My RAID is meant to take a raw 4K 12-bit DPX stream and write each frame file at about 14-16 frames per second. It runs quite well; I've only had one fault with it in over a year, and it was just a drive error that corrupted 5 DPX files out of over 300,000. I only lost a day or two of work rebuilding the RAID because I didn't trust it with client film. It's been working great since the rebuild and I have a spare ready if the problem drive finally breaks. I've probably passed 400-500 TB of scanning data through these drives by now with no issues other than stated above.
Intel's marketing also blocked non-Intel NVMe drives from working on the Z590 platform.
That was fun. Tried two 480-gig Optane drives on a Gen 1 Threadripper using AMD RAID..... Intel forever lost points on that move. All my servers are EPYC now.
For me, the only RAID that works at affordable cost is Linux md RAID; you can set up a RAID 1 root system drive as long as you have a separate non-RAID boot partition to store the kernel, bootloader and initramfs. For data, it just works as expected and is rock solid, as long as you periodically check the drive status or have drive errors reported by mail, for example.
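The periodic-check / mail-on-error part mentioned there can look roughly like this; the mail address is a placeholder and the mdadm.conf path varies by distro.

```
cat /proc/mdstat            # quick overview of all md arrays
mdadm --detail /dev/md0     # per-array health and member status

# In /etc/mdadm/mdadm.conf (or /etc/mdadm.conf):
#   MAILADDR admin@example.com
# Then run the monitor yourself, or let the distro's mdmonitor service do it:
mdadm --monitor --scan --daemonise
```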
Question: isn't the Disk Management RAID in Windows single-threaded, and don't they recommend Storage Spaces to take advantage of additional cores/threads?
Correct, the dynamic-disk software RAID used in Disk Management is considered obsolete/deprecated/legacy by Microsoft at this point (as you would expect, as it was introduced in Windows/Server 2000!) and was replaced with Storage Spaces.
The problem with M.2 RAID is that usually the first M.2 slot is wired direct to the CPU and the other 3 are on the chipset. So you need to look at the block diagram to see what is getting switched.
The writeback caching inconsistency isn't so much about whether the drives are in sync (resilvered); it's about whether the writes happen in the correct order. I.e., when doing an atomic mv of one file over another, writing the metadata for the mv before writing the data of the new file to disk results in an atomic obliteration, when the software stack expects this to be impossible and applies no other mitigations. Writeback allows things to be written out of order, i.e. not synchronously.
Is there a follow-up to this? Also, is there a way to enable logging of these errors on the AMD system!?
I read the Intel board raid works with non-intel SSDs if you get the more expensive key.
When discussing Windows RAID it would be helpful to clarify the two types: the one through Disk Management, and the one via Storage Spaces. Worth mentioning that some features are deprecated - for example, spanned disks. Storage Spaces is the preferred method for Windows software RAID. For me you can't beat hardware RAID with a decent memory cache and battery-backed write caching. RST motherboard RAID 0 is a fast option, but has no data resilience - good for test/lab systems only.
Intel Matrix Storage is nice in that the metadata format is supported by Linux (via mdadm) so it is a shame it cannot be relied upon. It greatly simplifies setup when you need to boot from the array.
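As an illustration of that mdadm support, inspecting and assembling an existing Intel Matrix/RST (IMSM) array from Linux looks roughly like this; device names are hypothetical.

```
mdadm --detail-platform               # what the Intel option ROM / UEFI supports
mdadm --examine /dev/sda /dev/sdb     # look for IMSM container metadata on the members
mdadm --assemble --scan               # assemble the container and the volumes inside it
cat /proc/mdstat
```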
Have an onboard RAID0 array with two WD Caviar Blacks from over a decade ago that are still running strong. These drives are really built to last.
Nice! What's your motherboard?
Well, this validates the mess I got into trying to RAID 1 some "data" drives in conjunction with some other drives in non-RAID (AHCI mode). The OS was straight-up non-RAID on the M.2 drives, same mobo. Serenity now! Luckily no data was lost, but it did take some time to recover; my itch for a RAID controller seems well founded now.
I can confirm a 100% watch rate for the rambles. Deliver us some wisdom silicon daddy!
I agree, basic consumer PC motherboards suck for this. Now I use a server-level motherboard and it's better, but I love an external PCIe SAS controller for my use, even with SATA SSD drives. I have done much on this topic on my channel... BTW, configuring spinning hard drives as single-drive RAID 0s and then using software RAID on top is not so good either... Best to use them in plain single-drive mode, and that is very basic, as in the slowest ship in your fleet. Great video dude...
I revived an old server using a single RAID 10 array of 6 enterprise-grade SAS HDDs (3 TB usable) for everything - boot and storage. I tested with simulated HDD failures and it's very smooth and stable. When replacing a drive it was seamless, with no noticeable effect on performance during the restoration process. I used a PERC RAID card, but I'm sure it would not have been as smooth with motherboard SATA RAID.
The HighPoint 370 controller and the VIA RAID found on my old motherboards really do look as if they were done well by comparison. Though their main problem was the PCI bottleneck.
I used RAID 0 on HDDs and at that time HDDs were offering much higher capacity per dollar. Performance is as expected; double the read and write for sequential and a slight increase for random. Been using it for about 3 years with no issues and I'm overall happy with it.
Would I go for RAID in the future?
Definitely not as SSDs have become so cheap. RAID on SSDs seems a little dumb because NVMe is already so fast and when PCIE 5 becomes the norm, it just wouldn't make sense.
About F'n time you guys talked about this. I had to find this out the hard way myself.
Motherboard BIOS fake RAID was a huge waste of time, and buggy AF on my X399 MEG Creation. Windows (also fake) RAID is the way to go with a PCIe expander card.
My intention was just to get max read/write speeds with RAID0. Wasn't bold enough, or willing to deal with the hell of making it a bootable RAID, so I didn't go that route.
Could you make a video or point me to a video on zfs for / ?
We have SM and Dell PE servers in our environment that primarily use ZFS for their data stores; however, for root we then do an md RAID 1 for that bit of reliability, and if I could have a one-size-fits-all that would be wonderful :P
i am two years late, but it's "redundant array of INDEPENDANT disks"... thanks for the awesome video!
Inexpensive is correct.
So is Independent.
INDEPENDANT is wrong.
Could you do a video explaining the scaling issues with Optane for the consumer? It's very fascinating how much that segment has stalled. As we move more and more to the cloud at a consumer level, you would think a 128gb-256gb optane-only computer system would be the end-goal for consumer performance.
You just saved me a ton of time! Thank you so much! I have a question though, if anyone could help.
I recently built a new Ryzen 5900X + RTX 3070 + MSI X570 Tomahawk multipurpose system. Crucial P1 NVME as a boot drive (good enough for me) and a Seagate 2TB HDD for storage. I will add a NAS to my setup some time in the future. But I wanted a large-ish drive for games (non critical data) that would be at the same time relatively fast compared to a regular HDD and cheap (also environmentally friendly; I'll explain).
So, I have a few 500GB HDDs lying around that I got for free, were not in use, and could be considered e-waste. I decided to populate all remaining SATA ports on my MOBO with them and make a 5-drive RAID-0 array as my games drive using Windows Disk Management's RAID, giving me a fast-ish 2.5-ish TB games drive. It's working fine. So much so that I delayed testing the MOBO RAID indefinitely. My question is: does this setup make sense to you? Is there anything that I could do better?
Thanks again! Great content
@Leve1Techs Which driver did you use for AMD RAID on Linux? Is there a new one? Or just the 17.2.1 that is over 4 years old ?
I'm not one to comment on videos much but I have to say that this one saved my bacon. Been using RAIDXpert2 for a while using RAID 10 on (4) 8TB SATA drives and always felt that the performance was not where it should have been. Over the past couple of weeks, I've been having some bad performance issues so early this AM I decided to blow away the RAID array and dig deeper as to what was causing the problem. Come to find out that one of the drives was transferring well below what it should have been and it ended up being a faulty SATA cable. At that point, I ended up creating a new RAID 10 setup in windows using disk management/storage spaces and the performance is much better.
I'm curious how this translates to Threadripper systems. My Zenith II has 5x NVMe slots, all of them on the CPU. I'm using 4x 500GB NVMe drives in a quad RAID0 and I was able to get the expected throughput using custom testing in IOmeter, but CrystalDiskMark was... let's go with "random" at best for the numbers it spat out.
Would it be worth me using an Optane drive on its own for my bootable drive in that 5th slot, and then using Windows RAID for the 4x RAID0 drives and bypassing AMD's driver altogether?
That's what I needed to know about the Intel motherboard RAID. I was hoping, and from looking at a few other videos and seeing the config in the BIOS I was expecting, this to be a hardware RAID. My experience with a hybrid has not been good. I needed the free Windows Server backup that I also had scheduled once a week as SOP in case there was ever an issue with the primary backup technology, which was EMC's StorageCraft ShadowProtect. "Ever" happened, and after many hours on the phone with StorageCraft, we both realized that they weren't actually getting an operating-system-restorable backup with the new hybrid controller that Dell had switched to as their standard server controller, which they didn't document as a hybrid, and the OS came pre-installed on the server. We got a hardware controller from them for that server, but immediately verified with them that none of the other systems had one of their hybrids.
I found this video looking for help with VROC on a x299 motherboard. After I filled in some of the pieces I came back here to share.
2:52 The EVGA SR-3 DARK has a C622 chipset. Third party SSDs on the approved list will work on this board with a hardware key.
3:37 I too have spent a lot of time and money looking to get VROC to work on my x299. You have to use Intel drives for VROC RAID to work on a x299 system. Third party drives will show up and work as a single drive only, if you have a key. The VROC application in Windows will notify you of a RAID error; it's the third party non-RAID drives in a VROC PCIe slot.
3:57 My OS did not see the volume until I installed the drivers.
4:24 You can only use RAID0 without a key. A standard key ($120.00) will allow RAID 0/1/10; Pro key ($250.00+) adds RAID5.
4:50 Intel 670p will work on x299.
12:00 if you use write back caching, get a UPS.
Side note: Intel VROC (VMD NVMe RAID) ports on his EVGA SR-3 DARK should be hot swappable.
My storage goals
OS: VROC RAID1 (2x2TB NVMe)
Data in use: VROC RAID0 (4x2TB NVMe)
Long term data: Intel RST RAID5 (5x8TB spinning rust with hot spare)
RAID is not a backup, get a RAID for your RAID.
I tried to set up 2 x M.2 drives in a RAID 0 on my X570 board in January. It sucked so bad that I just ended up using the software RAID in Windows. Works great, was easy to do, and I haven't had a single problem out of it.
Mobo RAID comes down to what chipset they use. Unless it's an Intel RAID setup I would steer clear. Been using mobo RAID 0 for years on MSI boards with no issues. Granted, this is purely for gaming, on games that benefit from fast loads like open-world types and other games that stream data as you play. However, with the newest NVMe sticks pushing close to 4000 MB/s sequential reads, RAID 0 is looking less and less shiny.
Have had two 2TB Intel 660p M.2s in RAID0 across Z370 and now X570 for years now. X570 was a little bit of a pain to set up but Intel was effortless. Have had no issues /shrug
(Just wanted one 4TB drive was sick of multiple drives, everything of value is stored on NAS so if the volume dies whatevs)
We use RAID 60 in our key NAS units. We employ the classic PCI board from LSI (now owned by Broadcom, IIRC), the MegaRAID. It has worked flawlessly for 7 years. We of course bought the supercapacitor backup add-on, which is crucial so that a power loss doesn't corrupt the directories.
RAID 5 has a known mathematical flaw and should be avoided: with today's drive sizes, the chance of hitting an unrecoverable read error during a rebuild is uncomfortably high. Use RAID 6 instead.
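Rough back-of-envelope version of that argument, assuming a consumer-class URE spec of one error per 10^14 bits and a 5-drive array of 8TB disks (both figures are only illustrative): rebuilding after one failure means reading the four surviving drives end to end, so

$$E[\text{read errors during rebuild}] \approx 4 \times 8\,\text{TB} \times 8\ \tfrac{\text{bits}}{\text{byte}} \times 10^{-14}\ \tfrac{\text{errors}}{\text{bit}} \approx 2.6$$

RAID 6's second parity can cover those errors during the rebuild. Enterprise drives rated at 10^-15 cut the estimate to roughly 0.26, which is better but still not negligible at today's capacities.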
Our only design flaw is that we have just 1 Gbit Ethernet to the switch, and that slows things down. When reading big chunks, the RAID engine is actually pretty fast. With mechanical drives inside, however, it does take 3 or 4 hours (!!!) to reboot our 2000 virtual machines. So next time we will use SSDs.
The problem with Windows software RAID is that after an unclean shutdown it assumes it needs to resync the data, so a slow rebuild gets forced on you. I also found out from the Macrium documentation a while back that dynamic disks in Windows are deprecated, so I stopped using it.
Software RAID in Linux and BSD, however, is awesome, and I stay away from hardware RAID and onboard RAID systems.
True, but only on mirrored drives. The striped drives are not affected.
RAID is another word for headache, and if you're a person who never backs up files it's even worse. The only reason I use it is the availability of SATA drives at a low price, and I get to keep running on the remaining drive if something happens. I keep stuff on RAID plus two other backups. When it does fail, I rebuild onto a new drive, then after a while swap in another new one. Problem is, drives are becoming less easy to find with all the store closings. I usually build systems that I can upgrade over a 6 to 10 year period. M.2 and SSD prices are coming down, so making backup images takes minutes instead of hours.
TRIM appears to be working on my system, but Optimize Drives (defrag) shows my NVMe RAID 1 array as a "Hard disk drive". Is that problematic?
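For anyone wanting to double-check the same thing, here's a tiny sketch that just wraps the stock fsutil query (Windows only; run it from an elevated prompt):

```python
# Quick TRIM check on Windows: DisableDeleteNotify = 0 means TRIM commands are being issued.
import subprocess

out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)  # look for "DisableDeleteNotify = 0"
```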
Quick question: would using 2 mechanical disks instead increase the read speeds in RAID 0 & 1? These issues seem to show up when using NVMe/SSDs.
Yes, but you'd still need ZFS for integrity so motherboard RAID is still pointless
I cannot tell you how much I appreciate the time and effort you put in!
I am currently upgrading to trx40 and have plans to use a raid card.
I will take your advice to heart Wendell.
Will use Windows to configure the RAID 0.
Undecided on whether or not to go with a bootable RAID.
The only advantage that I am aware of is faster boot times.
Thank You Again Wendell!!
I would Sub to the channel........ but I already have.
I use a RAID card in my workstation, running RAID 6 on four 15,000 RPM disks. Bloody fast. It's used as a local data disk that regularly writes to a ZFS system.
When writing to the ZFS array, the whole system literally has to wait on the 1 Gbit connection most of the time. The RAID card is only a 3 Gbps controller, but that's fast enough for local use.
How about filesystem-level RAID like ZFS or Btrfs?
I needed RAID 1 for my storage HDDs (nothing special, 2x 3TB). I'd read multiple comments saying Windows soft RAID is great and recommended over Intel RAID. So I listened to them and set up a Windows RAID the old way: converting to dynamic volumes and adding the mirror via Disk Management. It crashed twice in three weeks, and I can't even count how many times it rebuilt. Finally I gave up and switched to Intel RAID (Intel Rapid Storage), and there have been no problems since. Not a single RAID rebuild.
@Level1Techs Hi Wendell, Can you point me to the forum threads? I wrote the internal Intel performance manuals and developer automation.
Perhaps I can provide recommendations and when I have some spare time I can write up some simple AI/ML automation scripts.
Personally, I use the x299 Designare 10G with 10 NVMe SSDs + VROC + TPM 2.0 + a 10980XE + 256 GB of 3600 MHz DRAM, which requires special firmware from Gigabyte Engineering, in Ubuntu 20.04 LTS x64 and Windows 10 x64.
P.S. If you look up my patents, we have something much better coming to a theater near you 😉
forum.level1techs.com/t/critiquing-really-shitty-amd-x570-also-b550-sata-ssd-raid1-10-performance-sequential-write-speed-merely-a-fraction-of-what-it-could-be/172541/27
Nice to meet you. Sure, docs and whatever is needed to use the awesome, including on the BIOS side, would be good. I get the impression some at Intel didn't think there were enough enthusiasts to bother documenting the awesome.
I've been trying for days to set up my ASUS B450M (Prime-A II) with a 1TB NVMe drive, a 500GB SATA SSD for the OS[s], and 3 HDDs in RAID 0. I liked the idea of motherboard RAID because I don't trust Win10 not to be awful in reliability and function.
Problem is, when I enabled RAID mode the NVMe drive didn't show up in the BIOS, and it only showed up in Windows Setup if I installed the SATA RAID drivers during setup (that made both SSDs and the RAID array appear, until I tried to create a Storage Space in Windows, which made the RAID array disappear when building the storage pool).
It seems the ONLY way to actually use all the drives is to run AHCI and use Windows Storage Spaces.
While I did spend a ton of hours persisting when I probably shouldn't have, it wasn't time wasted. I learned a lot about storage, about how the BIOS and Windows drivers behave, and got familiar with installing drivers during OS setup.
Literally my only use case for RAID is that I only see a single C: drive in Windows. There's no way other than motherboard NVMe RAID to combine two 2TB drives into a *bootable* combined 4TB volume; I'd even take lower performance in RAID than single drives just so I see a single drive.
Is it possible for GRUB or another bootloader to boot Windows/Linux from media the BIOS has no boot support for, such as software RAID or, in my case, PCIe NVMe?
Assuming GRUB itself is sitting on media the BIOS has no trouble booting from.
Hi. On Linux you only need one boot partition (/boot) as non-RAID to store your boot files (kernel, initramfs, and a bootloader such as GRUB). All the rest of your system (the root filesystem /, /home, etc.) can then live on different RAID volumes. For example, I tend to use this scheme on servers: one /boot partition (non-RAID), plus an alternative /boot2 partition on a second drive (non-RAID but synced periodically using rsync), a RAID 1 volume (two partitions, one per drive) for the root filesystem /, and one RAID 5 array (at least 3 partitions on 3 drives) for your data in /home. There are several tutorials on how to do that for the different Linux distributions. It is usually supported by all "server" distros, but it can be a bit complicated for a regular desktop install...
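If it helps, a minimal sketch of the mdadm side of that layout, assuming three drives sda/sdb/sdc already partitioned as described (all device names are placeholders, and the mdadm.conf path differs between distros):

```python
# Sketch: create the RAID 1 root and RAID 5 /home arrays described above.
# Device names are placeholders; run as root on a system with mdadm installed.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# /boot (sda1) and /boot2 (sdb1) stay plain, non-RAID partitions.

# RAID 1 for the root filesystem:
run(["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
     "/dev/sda2", "/dev/sdb2"])

# RAID 5 for /home, one partition from each of the three drives:
run(["mdadm", "--create", "/dev/md1", "--level=5", "--raid-devices=3",
     "/dev/sda3", "/dev/sdb3", "/dev/sdc3"])

# Record the arrays so they assemble at boot (path shown is the Debian-style one):
run(["sh", "-c", "mdadm --detail --scan >> /etc/mdadm/mdadm.conf"])
```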
Wish I'd seen this video a few months ago. Built my first PC in 15 years back in January: a 5900X on the X570 platform. Last time I built a PC, RAID was the standard for storage, so if one Gen4 NVMe drive can hit 5k MB/s reads, two in RAID0 would be even better, right? Wrong. After getting everything set up, I had 2x Corsair Gen4 NVMe drives as a RAID0 boot drive, with all benchmarks showing a ton of performance left on the table, nowhere near 5k MB/s reads. I was finally able to narrow the cause down to AMD's RAID driver. Such a headache to switch back to AHCI-managed drives.
I did the same and saw improvement only in large block transfers (2x as expected in pure sequential reads), but everything else was slower with NVMe RAID via RAIDXpert2 and the on-board config. Anyway, I can't see why anyone would boot off a RAID, especially with how fast M.2 sticks are these days. You're just asking for trouble.
I haven't had trouble using RAID on my motherboard with AMD RAID. I put two 2 TB 7200 RPM HDDs in RAID 0 for my Steam library on my X570 Aorus Master mobo and it's amazing. I basically have a very cheap 4 TB SATA SSD now, with read/write speeds averaging around 427 MB/s!
That made me think of the time I built a Windows server with motherboard RAID, and Windows refused to enable disk cache because it didn't see a battery backup. It was the slowest new install I've ever done and there was no fix at the time.
So on my Epyc system with Unraid, is AMD suppressing SATA errors? Most of my drives are on PCIe RAID/IT-mode controllers. I tend to RAID large spinning-rust drives on real RAID controllers, but I get the feeling a Windows RAID would be fine for 4x HDDs.
Kinda think that in most "ordinary" cases the SSD has killed some of the reasons for RAID, since it provides good speed out of the box. The protection, while valid, is equally well handled by backups, which you still have to do if you choose RAID anyway.
Note: I'm saying the above for ordinary use. For servers or special-sauce setups, RAID still has some magic you might chase down, but anyway...
I had FuzeDrive completely blow out a partition of mine, necessitating a complete system reload (without FuzeDrive).
Was it a boot drive? Caching a boot drive is, in my experience, even riskier than RAID; so many issues with Optane caching as well. I ended up reverting many people to plain NVMe boot drives.
How about RAID on dedicated external controllers such as LSI 9360 series. Worth it or same as motherboard RAID?
Would enjoy a review of all the X570 mobos out there, specifically the ASUS ROG Strix X570-E Gaming.
Watched one of your old vids where you mentioned Sun Microsystems SPARC servers. Yup, that took me back a couple of decades...
I currently have my OS (Win10) installed on a RAID0 array of 2x 500GB SSDs, using the Gigabyte onboard RAID. I'm thinking about adding another 2x 500GB SSDs and switching to RAID10. How much will performance change? In case it matters, this is a 12-year-old custom build with an Intel 3770K, and it's a bulletproof workhorse. Never failed once in 12 years.
Great video!! Thank you
Yeah, I only use X570's onboard RAID to create a small JBOD to mirror my NAS, so I can upload it to B2 and have a second physical copy of my data. Not quite the 3-2-1 I want, but I'm getting there. Otherwise, since I don't have a real need for RAID, I just don't use it at all. Simple as that. Hell, even in TrueNAS I just use mirrored VDEVs lol.
RAID stands for Redundant Array of Independent Disks... not Inexpensive Disks :) even though so many people think of it this way ;)
Supposedly it can be either independent or inexpensive; I've heard both and seen both in documentation.
The original paper from 1988 that coined the name was "A Case for Redundant Arrays of Inexpensive Disks (RAID)", and it should not be forgotten, even though there are some hardware vendors who would very much like to put the "Inexpensive" part to rest. You know who you are, EMC and NetApp.
@@wolf2965 Yeah, it was that early on, but AFAIK an advisory board in some pro-RAID, pro-SAN standardisation council/body/conglomerate decided back in the 90s to switch that supposedly pejorative, diminishing, or unrealistic term to "Independent"...
Also, because most raid configs use striping, the disks aren't even independent anyway
@@werewolfmoney6602 Mirroring may be used just as often, especially in enterprise settings, though for added safety rather than performance... but neither it nor even a JBOD configuration can be treated as truly independent if data and metadata can end up on different devices, regardless of file system or hardware choices. The "independent" part stems more from using separate/independent devices to form a storage pool instead of using bigger and potentially more performant/powerful devices...
Point of the video made in 6 seconds. Love it and I do agree.
What will help: DMI 4.0, released on November 4, 2021 with the 600-series chipsets, has 8 lanes each running at 16 GT/s, two times faster than DMI 3.0 x8.
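For the arithmetic (assuming DMI 4.0 uses a PCIe 4.0-style link with 128b/130b encoding, which is my reading rather than something Intel spells out in consumer material):

$$\text{DMI 4.0 x8: } \frac{8 \times 16\ \text{GT/s} \times \tfrac{128}{130}}{8\ \text{bits/B}} \approx 15.8\ \text{GB/s}, \qquad \text{DMI 3.0 x8: } \frac{8 \times 8\ \text{GT/s} \times \tfrac{128}{130}}{8\ \text{bits/B}} \approx 7.9\ \text{GB/s}$$

So the chipset uplink roughly doubles, which is what matters for RAID arrays hanging off chipset-attached M.2 slots.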
Revisit on X670E? AMD also says there are different drivers for different CPUs. Did and cc? It's been a year; they have to have fixed the bugs by now, and it's a new platform.
Way back when, you could get an MS soft-RAID volume to boot... but it was very tricky.
What's funny is that I experienced the opposite with my ASUS B570 board. I got poor results with the Windows Disk Management RAID 0, and around a 10-15% boost in speeds with the AMD RAID software. I am, however, not using SSDs but 5x WD 4TB HDDs. Something to consider as well is the heat on those SSDs and how they slow down after being run hard for long periods, which will also lead to weird intermittent R/W speeds. I also noticed you're using cheap SSDs, which are notorious for getting hot and slowing down.
Even a decent Samsung NVMe, if run for long periods of, say, plot creation for HDD coin farming, will start to flip-flop: latency skyrockets above 1 million and IOPS sink randomly.
Thanks for the suggestion. I have an ASUS B550-F Gaming MB and wish to set up two 4TB WD drives in RAID 1.
I will test the setup in the BIOS to see if it delivers the expected result of 1x write & 2x read speed. 😁
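Roughly what that should look like, assuming each 4TB drive sustains about 180 MB/s on its own (an illustrative number, not a spec) and the RAID 1 implementation actually balances reads across both members:

$$\text{write} \approx 1 \times 180 = 180\ \text{MB/s}, \qquad \text{read} \lesssim 2 \times 180 = 360\ \text{MB/s}$$

In practice many firmware RAID 1 implementations don't split sequential reads across both drives, so treat the 2x read as a best case rather than a given.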
@@stevetech1949 I need to correct myself: I have the ASUS B550-F Gaming MB as well, not the 570. I am up to 5x 4TB HDDs that I have put into a RAID 0 array. With the 550 board, don't forget that if you use the second NVMe slot, it turns off the last two SATA ports on the board. Oh, and with the 5 drives in RAID 0, I'm currently hitting around 600-700 MB/s transfer rates on the array.
Onboard RAID is one of those things I abandoned once SSDs really took off. It used to be the only way to make your computer actually faster because HDDs were so god damn slow.
I decided to do a RAID0 array with two M.2 Gen3 drives on an X570 Aorus Master. Then I decided I wanted to try a VFIO setup with a 5900X, a 6800 XT Merc, and a 1070 Ti. Unfortunately PCIe lanes are running thin, so I might wait for the new Threadripper. In conclusion, I will always find a reason to build a bigger and faster system, and your videos always help hahaha, thank you!!
Edit: the RAID0 is a non-bootable volume.
I wonder if they have fixed these issues yet? I am considering doing SATA RAID0 instead of NVMe RAID0 on X570, or maybe even waiting for X690/X790 if they ever make that..... yes, I put 90 for a reason; I think TRX40 should go away and bring back max PCIe on the desktop. :)
I just want one big media drive for Steam games. Wish we could go back to the HDD days, when you had much more storage than you could fill, but with NVMe speeds.
With modern games at 250 GB, even a 4 TB SSD won't get you far.
Any thoughts on Windows Storage Spaces? I've used Unraid and TrueNAS but on Windows I can use Backblaze's unlimited off-site backup for 7 dollars per month. The Windows setup I have now is technically worse in just about every other way. If backblaze supported Linux on their personal backup plan I'd switch back in a heartbeat. Is there any Linux supported offsite backup solution that'll keep my 11ish TB safe for 7 bucks a month?
Tarsnap? Storage spaces is pretty not great except for the narrow use case it's designed for
@@Level1Techs I'll give it a go on a subset of my data. There's something to be said for Backblaze's flat-fee, unlimited-capacity backup: it doesn't take up mindspace. I'm not sure how much the deduplication and compression on Tarsnap would actually cost per month. It's 0.25c per GB, which would be very expensive if it worked the same way as typical backup solutions. I'd need to test with some of my data.
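Taking the quoted 0.25c/GB-month at face value (I haven't verified the current rate) and ignoring dedup/compression entirely, the worst case for 11 TB would be roughly

$$11{,}000\ \text{GB} \times \$0.0025\ \text{per GB-month} \approx \$27.50\ \text{per month},$$

so whether it competes with the $7 flat fee comes down almost entirely to how much of the data survives deduplication and compression.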
I used PrimoCache for a few years, but I burned through my Intel 750's write lifetime. These days I'd rather avoid all types of SSD cache, because they unduly turn read-heavy workloads into write-heavy ones.
All SSD all the way.
I must admit I have used RAID 0 on my last three systems and had no noticeable issues, and a definite, noticeable improvement. This latest build, however, used 3x NVMe disks for the array and it was a waste; it basically only gave the speed of two drives. I'd still recommend dual-drive arrays to people just looking for performance.