4 NVMe Drives on a Single PCI Card ($25) - My New TrueNAS Cache Solution

  • Published: 4 Apr 2021
  • I have a lot of PCIe lanes in my new Home Server so I wanted to give this cheap NVMe PCI card from AliExpress a shot...and it works pretty well.
    XiWai Card: www.aliexpress.com/item/10050...
    Proxmox Disk Passthrough Guide: pve.proxmox.com/wiki/Passthro...
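  • A minimal sketch of the disk-passthrough step from the guide above (the VM ID 100 and the by-id device path are placeholder examples, not values from the video):

      # Find a stable by-id path for the NVMe drive to pass through
      ls -l /dev/disk/by-id/ | grep nvme

      # Attach it to VM 100 as a raw SCSI disk
      qm set 100 -scsi1 /dev/disk/by-id/nvme-MODEL_SERIAL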

Comments • 131

  • @timvherpen
    @timvherpen 2 years ago +82

    IT mode and IR mode are terms used to describe the two kinds of firmware LSI RAID controllers can have.
    This card has nothing to do with a RAID controller.
    AHCI is only used when you plug in SATA M.2 drives.
    NVMe is native to PCIe, so there is no conversion on the adapter card when using NVMe drives.

  • @ralmslb
    @ralmslb 2 years ago +41

    8:30 This is expected. These adapters have no logic in them; all they do is physically split the PCIe traces, which is why you need bifurcation support on the motherboard.
    Since each NVMe drive uses a x4 link, the card maxes out at 4 drives in a x16 PCIe slot.
    You will never see the adapter itself anywhere because it has no controller; all it's doing is splitting the x16 PCIe traces into four x4 slots while also distributing power.
    The OS will see the drives directly.
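
    To see this for yourself: with bifurcation enabled, each SSD enumerates as its own PCIe endpoint and the passive adapter never shows up. A sketch (device names vary per system, and nvme-cli is assumed for the second command):

      # Each NVMe SSD appears as a separate PCIe device; no adapter/controller is listed
      lspci | grep -i "non-volatile memory"

      # List the NVMe controllers/namespaces the kernel sees directly
      nvme list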

  • @zparihar
    @zparihar 2 years ago +66

    Just a suggestion... you should have tested performance and shown us the benchmarks.

    • @PoeLemic
      @PoeLemic 2 years ago +3

      But he did... didn't he? He used CrystalDiskMark at 11:29. That's very useful for getting a basic idea of how the drives / storage device will perform.

    • @DavidMBebber
      @DavidMBebber 2 years ago +3

      I would be interested in the performance boost in his TrueNAS machine. ...also why use Proxmox and a TrueNAS VM vs TrueNAS Scale on bare metal, which has KVM for any other VMs?

    • @rudypieplenbosch6752
      @rudypieplenbosch6752 2 years ago +1

      Yes, exactly: what is the performance improvement? My guess is there is none. A ZFS log device only increases performance if you use ZFS in the synchronous mode of writing data, which is slower but more secure. Even then, it will always be slower than the less secure asynchronous way of writing to your ZFS pools. Lawrence mentions this in some of his tutorials.

    • @leexgx
      @leexgx 1 year ago

      @@bananafartmanmd8775 Any motherboard that supports PCIe bifurcation will work (it splits the x16 slot into 4x4 PCIe lanes; there is zero logic in this card, each slot is just wired directly to 4 lanes). Nothing to do with Intel VROC.

    • @leexgx
      @leexgx 1 year ago +1

      @UC7InT0ngXR50dhyDDtX0oJg It's a Chinese product, so they put whatever they want on the label. This is purely a simple, direct 4x4 PCIe lane card; the only requirement is bifurcation support in the BIOS to split the slot into 4x4 lanes (it will work with VROC, but that's not a requirement).
      This card has no smarts, and it's not an "IT mode" card as he stated in the video; "IT mode" would be an HBA card, or something with a bridge chip.

  • @psycl0ptic
    @psycl0ptic 2 years ago +12

    CrystalDiskMark has a mode for NVMe (Settings > NVMe SSD); you should select it when testing NVMe drives. Also, 1 GB is way too small for a device that can read and write at over 1 GB/s. Select 4, 8, or better yet 16 GB for your test size.
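
    If you'd rather benchmark from a shell than from CrystalDiskMark, a rough fio equivalent with a larger test size might look like this (a sketch; the file path, size, and queue depth are assumptions to adapt):

      # Sequential read, 16 GiB test file, QD32, direct I/O to bypass the page cache
      fio --name=seqread --filename=/mnt/test/fio.dat --rw=read \
          --bs=1M --size=16G --ioengine=libaio --iodepth=32 --direct=1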

  • @Accuaro
    @Accuaro 1 year ago +4

    I plan on doing the same with 4 Optane P1600X 118GB NVMe SSDs

  • @Maisonier
    @Maisonier 3 years ago +1

    Amazing video! Liked and subscribed.

  • @phychmasher
    @phychmasher 2 years ago +3

    11:00 That's why we check the "Quick Format" box on the screen just before this.

  • @terrysimons
    @terrysimons 1 year ago +1

    I'm using a RIITOP M.2 NVMe SSD to PCIe 3.1 x8/x16 card in x4 mode on my TrueNAS Mini X+ server, with 110mm Samsung server drives (which have power-loss protection) in a mirror configuration as a SLOG write cache.

  • @thomaseikeberg1753
    @thomaseikeberg1753 3 years ago +6

    Also, as I understand ZFS, the SLOG only applies to synchronous writes; caching for asynchronous writes is done purely in RAM.

    • @RaidOwl
      @RaidOwl  3 years ago +1

      Yeah I need to read more into ZFS for sure.

    • @KifKroker
      @KifKroker 1 year ago

      I also think a 512 GB SLOG device is insanely large. I think an L2ARC might provide a benefit, as only 64 GB of RAM is available over a 24+ TB pool. Not sure though; it would be cool to see some tests!

    • @TheRealMrGuvernment
      @TheRealMrGuvernment 1 year ago

      @@KifKroker SLOG is only good for certain scenarios, like NFS shares, and SLOG SSDs need power-loss protection and massive write endurance, as in Optane; desktop SSDs are shit for SLOG.
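
      For context, both things discussed in this thread are one-liners in ZFS; a sketch, assuming a pool named tank and placeholder device paths:

        # Check whether a dataset issues synchronous writes at all (a SLOG only matters when sync is in play)
        zfs get sync tank

        # Add two SSDs as a mirrored SLOG (log) vdev
        zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1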

  • @pbrigham
    @pbrigham 2 years ago +4

    OK, so in the end, what was the improvement in transfer speeds compared to what you had before?

  • @cenubit
    @cenubit 2 years ago +1

    I use 4x 2TB Samsung NVMe drives in a QNAP QM2-4P-384 RAID card as RAID 5, and use this volume for VMs (an Ubuntu K8s cluster). Works well.

  • @arcforceworld
    @arcforceworld 2 years ago

    Good find with that bit of hardware; the price has jumped up to 50 bucks now lol!

  • @xsillycarnifex
    @xsillycarnifex 1 year ago

    I've had issues with Silicon Power A60 drives on an ASUS Hyper M.2 x16 in a Threadripper build (so it has plenty of PCIe lanes), where the drives randomly disconnect and throw errors. It's become very unstable, and I'm starting to lose faith in the brand. I'm not sure if you're having these sorts of issues, but I'm curious whether you ran this longer term.

  • @StenIsaksson
    @StenIsaksson 1 year ago +3

    The software RAID in Disk Management doesn't support TRIM, because it converts the disks to dynamic disks, and dynamic disks don't support TRIM.
    I had to use Storage Spaces instead to create the volume.

  • @sergusy7005
    @sergusy7005 2 years ago +7

    Hi. I've been using two Samsung 512 GB NVMe drives as cache on my QNAP NAS. After a couple of months of usage, the cache has eaten 12% of each NVMe drive's health. So, cache is a great thing! But it comes with the cost of the cache drives, which you'll have to replace once a year...

    • @trashtrashisfree
      @trashtrashisfree 11 months ago

      Update the firmware on your drives; it may reduce that problem in the future.

    • @Ivanos
      @Ivanos 11 months ago

      Hahaha, you never did Chia plotting on NVMe? Then you'd really have something to complain about ;) 60% health left per NVMe after 300+ TB of writes.

  • @st2vik422
    @st2vik422 1 year ago

    I'd also like to see some real before/after speed tests, but nevertheless, amazing tip :o)

  • @trumanhw
    @trumanhw 3 years ago +1

    Interesting. Unfortunately, that company raised the price to 166% ($58 now) ...
    and of course that's the price before adding the SSDs; cool nonetheless. I just bought a ZIL device to play with (one of those Radian RMS-200 8GB) and was thinking about getting a retimer or redriver card from Supermicro (I can't quite recall why I chose the retimer variant).
    My FIRST chore? (after freeing my data, held HOSTAGE by a ZVOL with DEDUPLICATION ENABLED!!) Reconfigure to TrueNAS Core in the hopes of configuring some option that supports tiered (NVMe) storage, and making a 3rd dedicated pool for things I'd like quick access to, using the 4x 4TB NVMe SSDs I've had for a while (I do have several T320s, but I might move away from those to something quieter / more efficient with a non-proprietary IPMI).
    Wouldn't mind seeing another video of before-and-after benchmarking if you're comfortable using Bonnie++ or some other IOPS test / benchmark, as well as checking your SMB performance from both macOS and Windows, since you're clearly using a 15.4in MBPr of some sort.

  • @kmmmoney
    @kmmmoney 1 year ago

    Wait, what about the performance of the log/cache drives on TrueNAS? Every forum I've read says any SSD cache in TrueNAS will only marginally improve reads; the real fix is adding a ton of RAM.

  • @JonatanCastro-secondary
    @JonatanCastro-secondary 2 years ago +3

    Excellent video, man! I'm super curious how much the cache disks improved your read/write speeds, if you ever test it. Thanks!

    • @TheRealMrGuvernment
      @TheRealMrGuvernment 1 year ago +1

      Likely zero, because unless your ARC (RAM) is full, it won't use the L2ARC at all.
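
      A quick way to sanity-check that on your own box; a sketch, assuming a pool named tank:

        # Show ARC size and hit rate; if hits are already high, an L2ARC adds little
        arc_summary | head -40

        # Add an SSD as L2ARC, then watch how much of it actually gets used
        zpool add tank cache /dev/nvme2n1
        zpool iostat -v tank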

  • @elad1770
    @elad1770 2 years ago

    Do you think my motherboard supports what you were talking about? I have an Aorus X470 Gaming 7 WiFi.
    I'd appreciate it if someone could respond.

  • @cszulu2000
    @cszulu2000 8 months ago

    Would love to see performance on a single drive and on all 4 simultaneously. Also, is it compatible with Unraid?

  • @attmrcmailik9653
    @attmrcmailik9653 11 days ago

    I'd like to do the same, but with two notable changes:
    1) Gen 5 instead of Gen 3
    2) For my home workstation/PC, not a NAS

  • @jimholloway1785
    @jimholloway1785 1 year ago

    What is the model # of this motherboard, and is it the motherboard in your TrueNAS server? Do you have a video of building your TrueNAS computer? It would be good to see exactly what you have in the server/case and all the devices.

  • @mascaradninja9838
    @mascaradninja9838 3 years ago +4

    Not getting the full performance (4x SSD speed) is because of Windows transfer speed limits.
    If you want the full performance, go with Windows Server or Linux.

    • @RaidOwl
      @RaidOwl  3 years ago +2

      Surprising because I’ve seen pretty high speeds in Windows. I know Windows file transfer is single threaded, but I don’t think that’s the bottleneck here. Does it have something to do with the striping? Either way I’ll give it a shot on one of my Ubuntu VMs, thanks!

    • @mascaradninja9838
      @mascaradninja9838 3 years ago

      @@RaidOwl I mean, maybe you're right, but tell me the results. Linus also talked about it in the Honey Badger review; you can see it there, maybe you'll find a solution.

    • @RaidOwl
      @RaidOwl  3 years ago +1

      For sure, I’m curious now. I didn’t really expect amazing results out of a $25 card. I’ll let you know what I find!

    • @mascaradninja9838
      @mascaradninja9838 3 years ago

      @@RaidOwl thanks

  • @pjasonq
    @pjasonq 1 year ago +1

    It wasn't 100% clear how much benefit you got from the NVMe caching. I want to set up a 10GbE NAS for home use... would you recommend 2x 8TB standard HDDs and 1 or 2 NVMe SSDs for caching? With the NVMe caching, will you get 10GbE speeds, or should I not bother at all and just get a few 2TB SSDs and put them into a pool?

  • @rbashiyo
    @rbashiyo 9 months ago

    @Raid Owl, Is it possible to enable RAID mode in 4?

  • @JavierChaparroM
    @JavierChaparroM 2 years ago +1

    HAHAHA, I sure didn't expect the grandpa on steroids

  • @OzzFan1000
    @OzzFan1000 2 years ago +2

    What has a bigger impact on performance? RAM or NVMe cache?

    • @Digi20
      @Digi20 1 year ago

      As a general rule of thumb: always max out the RAM of a TrueNAS system before you start adding cache layers. BTW, this video is not very well researched (if at all) and gets some details wrong, or explains them insufficiently. TrueNAS caching is a very deep topic, and you will not see much benefit (or may even lose performance) if you "just add some NVMe SSDs" to a system, with the possibility of severely hurting data integrity on top.

  • @walter_lesaulnier
    @walter_lesaulnier 3 years ago +5

    The first computer I built had a 4004 processor, 2 KB of RAM that cost $300, and no permanent storage. (And a hexadecimal keypad and 9" monochrome green screen)

    • @RaidOwl
      @RaidOwl  3 years ago +2

      2 KB of RAM...oh my

  • @zhanko73
    @zhanko73 1 year ago +1

    How stable are the drives passed through to TrueNAS? On the TrueNAS forum, one of the moderators, who seems experienced, suggests not passing through drives, and not even passing through the controller. Instead they suggest installing TrueNAS on bare metal due to stability issues.

    • @BoraHorzaGobuchul
      @BoraHorzaGobuchul 5 months ago +1

      They always say that; still, lots of people run their NAS virtualized and experience no problems whatsoever.

    • @zhanko73
      @zhanko73 5 months ago

      @@BoraHorzaGobuchul I did the same, and there's been no issue so far. The only stability issue was related to a power cable. Also, I'm looking for a process to convert RAID 5 to RAID 6 (called RAIDZ1 and RAIDZ2 in ZFS, if I'm correct) by adding an additional HDD. So far I've read this is not possible...

    • @BoraHorzaGobuchul
      @BoraHorzaGobuchul 5 months ago +1

      @@zhanko73 It isn't, AFAIK. There's talk of new expansion functionality in the works, but it's been a long time and still no news there. The only option I know of is creating a new vdev and migrating the data to it.

  • @quickben2090
    @quickben2090 2 years ago +1

    Any updates? Does it still work?

  • @creed5248
    @creed5248 1 year ago

    I never get decent speeds off of add-on PCIe cards; even PCIe 4.0 cards are way slower than the motherboard's onboard interface??

  • @philippemiller4740
    @philippemiller4740 3 years ago +3

    PCI isn't the same as PCIe. Do you need a CPU/motherboard that supports lane bifurcation to use this kind of adapter?

    • @RaidOwl
      @RaidOwl  3 years ago

      Yes, your motherboard needs to support 4x4x4x4 bifurcation.

    • @philippemiller4740
      @philippemiller4740 2 years ago +2

      @@RaidOwl Sorry, I missed that part in your video. Great video! It just gets me when you say PCI instead of PCIe haha. I remember PCI-X 100/133 MHz slots; they rocked! White slots for the win!

  • @noxlupi1
    @noxlupi1 3 years ago +8

    Have you checked your SLOG hit rate? I highly doubt this will give you much performance; only NFS synced data is affected by it. 1TB of NVMe for a SLOG seems a major waste, unless you use sync=always on NFS/Unix shares, which is only useful for in-flight data, like running VMs directly on the NAS. Those NVMe drives would do better in an L2ARC, which is where to put your money if you load stuff from the NAS (games, video, sound, samples, etc.). If long-term storage is the main purpose of the NAS, cache is pretty redundant. For overall read/write performance, look into the newer metadata vdev. In this case I would use 3 in a mirror and the last one as a hot spare, since a metadata vdev is part of the whole data pool. This will give you way faster performance in both read and write, as only large blocks will go to the spinning rust, while index tables, small files, etc. will go to the metadata vdev. That will give you SSD-like performance and seek times on your NAS. However, you will need to destroy the pool and rebuild it with the metadata vdev included.

    • @RaidOwl
      @RaidOwl  3 years ago

      Hey, thanks for the super informative post. I'm not super well versed in storage tech, but I've been learning. I assumed that an L2ARC cache was only necessary if you're running out of RAM space for ZFS. I've read so many posts about why you should/shouldn't have a SLOG/L2ARC cache, and there never seems to be a general consensus. Again, thanks for your informative post; I appreciate it!

    • @noxlupi1
      @noxlupi1 3 years ago +3

      @@RaidOwl You are very welcome. I too have been reading forums up and down in regards to ZFS, and it can be quite difficult to wrap your head around, although it's simple once you grasp it. The issue at times seems to be that information gets picked up and passed along without being fully understood (then again, how would one know that they fully understand it). I am somewhat confident now in how the ZFS system works.
      It is true that RAM is the first cache (the ARC), but it only holds recent data in memory. It is fast as hell, but as you copy more data to the NAS, the cache in memory gets discarded, so when you need it at a later time, it no longer exists in cache. That's where the L2ARC kicks in. It retains a copy of recently written and accessed data, so a 1TB L2ARC would give you NVMe latency and speed for the latest 1TB of data that has been accessed, and it gets a new timestamp every time you access it. So a game you play on a daily or weekly basis will always load from the L2ARC. Also, L2ARC is now persistent, so unlike the main RAM ARC, the L2ARC cache is available immediately after a reboot and does not need to build up again.
      The SLOG is only for writing log info for synced transfers, which slows down the transfer, as the log has to be written to disk during "copy on write sync". Sync is not allowed to keep this log only in memory, as that would mark the data as "dirty" (data that is not yet written and logged to disk); this is a safety measure to ensure you don't get in-flight data corruption. For home use, turning sync off entirely will give you more performance, SLOG or no SLOG. No sync does not put your stored data in danger, only in-flight data, which will get lost; just like when pulling the plug on your computer, it will be just fine, but you may have lost the last minute of changes.
      The metadata vdev is the new cache. This will really speed up the NAS, as what mostly slows down a NAS is small files, databases, and indexing. So splitting the data into classes (small files and big files), putting the small ones on SSD and the big files on HDD, is quite clever, as the small files usually only take up 0.1 - 0.5 percent of the total storage, though they are many and slow for HDDs to transfer. Large files transfer quite well on the HDDs, especially in a RAID.
      Hope that sheds some light on it. (I base my conclusions on study and thorough testing.)

    • @RaidOwl
      @RaidOwl  3 years ago

      Would you say that an L2ARC is unnecessary if you have adequate RAM? For example, if my RAM fills up and some files are loaded into the L2ARC, would those files be hindered once the RAM clears back up (since they're 'stuck' in the L2ARC instead of being reloaded into RAM)? This sounds like a strange case that may come up when transferring a large number of medium-ish sized files back and forth (maybe editing photos/video).
      The metadata vdev sounds cool, and I'll definitely look into that on my main TrueNAS system.

    • @noxlupi1
      @noxlupi1 3 years ago +1

      @@RaidOwl No. It is a 1-2-3 priority: if data is not available in RAM, it will look to the L2ARC, and then to the raid. I have 64GB of RAM for my ZFS, and my ARC gets hit from time to time, but mostly when loading samples for music production. For my storage of general data, it does next to nothing.
      Yes, the metadata vdev is closer to what most people want from a cache, while still retaining the ZFS file system features. Also, the metadata vdev will render the SLOG somewhat obsolete: the reason sync slows down writes is that a lot of small log data must be written to disk, and with a metadata vdev that will go directly to SSD automatically. However, it will not replace the L2ARC, as that is ready-to-use cache of both big and small files, and it does not need to wait for the disks to spin up or seek within big files. Still, with a metadata vdev and sufficient RAM, you will not feel much of a difference, and it works for the whole pool, regardless of reboots or the amount of RAM. Just be sure to make it safe: at least 3 in a mirror, and in your case, make the last one a hot spare.
      You could put it like this: the ARC (RAM) is for workspace, the currently active cache, and it is absolutely essential for VMs. The L2ARC is for loading/copying/accessing older data. The metadata vdev is a constant cache for small files, which are a nightmare for HDDs.

    • @RaidOwl
      @RaidOwl  3 years ago

      Okay, yeah, I'm in the same boat there with 64 gigs of RAM. I think I'll throw a metadata vdev in there and monitor the changes in different use cases.
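
      For anyone following along, the metadata vdev discussed above is added as a "special" vdev, and small file blocks can optionally be redirected to it as well; a sketch, where the pool name, devices, and the 32K threshold are example values:

        # Mirrored special vdev for metadata; losing this vdev loses the pool, so mirror it
        zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

        # Optionally store small file blocks (up to 32K) on the special vdev too
        zfs set special_small_blocks=32K tank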

  • @patrickprafke4894
    @patrickprafke4894 1 year ago

    Funny. I have a Samsung 950 Pro in my X58 system as a boot drive, and 4 more on a bifurcation card in a Windows striped RAID as my game drive. I've had that for about 4 years. Boot and load times are sick.

  • @magicmanj32
    @magicmanj32 3 years ago +1

    The Proxmox passthrough page is empty.

  • @bulzaiguard
    @bulzaiguard 3 years ago +1

    Hmm, I've been looking at something like this, but I'd like 4.0 (future-proofing, I guess).
    So far I've found the ASUS Hyper M.2 x16 Gen 4 card,
    but the price of this one might make me want to try it out, as you said it's pretty cheap.

    • @RaidOwl
      @RaidOwl  3 years ago

      Yeah I was close to buying the ASUS one too, but had to try out the AliExpress version first haha.

    • @trumanhw
      @trumanhw 3 years ago

      I don't think that's necessary ...

    • @TheRealMrGuvernment
      @TheRealMrGuvernment 1 year ago

      If you're running TrueNAS, more RAM is always #1 before you go with an L2ARC (cache drive).

  • @elusoryowl2797
    @elusoryowl2797 3 years ago

    Just created a TrueNAS server a few days ago, but I'm considering buying just a single NVMe drive for caching.

    • @RaidOwl
      @RaidOwl  3 years ago +1

      For most people that’s perfectly fine.

  • @adrianTNT
    @adrianTNT 1 year ago +1

    4:35 Did I hear a crack? :)

  • @Dexter101x
    @Dexter101x 3 years ago +1

    I wouldn't use a striped volume, because if one disk fails, you've lost all the data across the whole volume.

    • @RaidOwl
      @RaidOwl  3 years ago

      Yeah, it's not optimal, but it sounds cool. It certainly depends on each person's use case.

  • @Kyanzes
    @Kyanzes 1 year ago

    Why did you do a full format instead of quick? Just curious.

    • @RaidOwl
      @RaidOwl  1 year ago

      Get them all squeaky clean

  • @shephusted2714
    @shephusted2714 1 year ago

    A good card for Linux RAID; team it up with faster networking.

  • @davidfrisken1617
    @davidfrisken1617 2 years ago +1

    Are you sure it's a PCI card? Doesn't PCI max out at 133 MB/s, and haven't those slots been gone from motherboards for coming up on about 10 years?

    • @RaidOwl
      @RaidOwl  2 years ago

      PCIe*

    • @rileybaker8294
      @rileybaker8294 2 years ago

      @@RaidOwl Why on earth did you refer to it as PCI throughout the video and in the title? Are you not aware that PCI exists and is meaningfully distinct?

    • @RaidOwl
      @RaidOwl  2 years ago

      ​@@rileybaker8294 Thanks for watching!

  • @axe863
    @axe863 2 years ago +1

    I have a Gen 4 Samsung 980 Pro... realistically, unless you're working with insane amounts of data, once it's over 3 GB/s there's no way you can tell the difference.

  • @mranthony1886
    @mranthony1886 2 years ago +1

    The card doesn't do the PCIe bifurcation itself. And it's not a RAID card, so IT mode and IR mode do not matter.

    • @axe863
      @axe863 2 years ago

      Unless you have a Threadripper, you'll take a loss.

  • @fredster100x
    @fredster100x 2 years ago +1

    Is that PCI or PCIe? You keep switching from one to the other. They are quite different…

    • @RaidOwl
      @RaidOwl  2 years ago

      PCIe. Not much consumer stuff these days is PCI.

  • @thomaseikeberg1753
    @thomaseikeberg1753 3 years ago +3

    This card ran one of my NVMe drives at PCIe Gen 1 until I changed to the ASUS Hyper card; then all my 2TB MP600 drives ran at about 18GBs/16GBs as expected (individual disk passthrough to a Windows 10 VM). Tested this on two separate systems, one ASRock Rack X570 with a Ryzen 5950X and one ASRock Rack ROMED8 with an EPYC Rome CPU, same result. Bottom line: you get what you pay for.

    • @RaidOwl
      @RaidOwl  3 years ago +1

      Yeah these cheap AliExpress cards are definitely hit or miss.

    • @MD-en3zm
      @MD-en3zm 3 years ago +1

      I tried something similar with the ASUS card on my X570, but it didn't work because you need bifurcation, and the Ryzen platform is really limited in lanes. I'm working on an EPYC Naples file server and trying again; hopefully I'll have better results.
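
      If you suspect a drive has trained at a lower PCIe generation like this, you can check the negotiated link speed/width without rebooting; a sketch, where 01:00.0 is an example device address from lspci:

        # LnkCap = what the device supports, LnkSta = what it actually negotiated
        sudo lspci -vv -s 01:00.0 | grep -E "LnkCap:|LnkSta:"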

  • @thatdiyguyraymondmonk1225
    @thatdiyguyraymondmonk1225 3 years ago +1

    Xiwai sounds more like “she way”

    • @RaidOwl
      @RaidOwl  3 years ago

      Yeah that makes sense lol

  • @Burnman83
    @Burnman83 6 months ago

    Wow, um, that was kinda worthless without showing what it actually did to increase performance.
    Also, just 2 thoughts: 1. How about testing the Windows array performance vs. just throwing the drives into a ZFS array in Proxmox, handing it through to Windows again, and testing it then? 2. You added this to a Plex array. You can only lose what you are currently writing because, you know, ZFS is CoW. On a Plex volume you would be writing only large video files, which you can just copy over again if something fails. Thus, a stripe would have made more sense for you than a mirror, if your network is fast enough.

  • @SalvatorePellitteri
    @SalvatorePellitteri 3 years ago +1

    SSDs are not 30 times faster than HDDs; measured in IOPS, they're tens of thousands of times faster.

    • @RaidOwl
      @RaidOwl  3 years ago

      Depending on what you’re doing, sure

  • @parranoic
    @parranoic 1 year ago

    It disturbs me terribly that you use Edge on a Mac

  • @user-zg6zm3cw7y
    @user-zg6zm3cw7y 3 months ago

    PCI or PCIe!

  • @ultramega8792
    @ultramega8792 1 year ago

    You need to make your own card; it just needs to hook up the drives in a CrossFire. Can you try that? If you build it, they will come. Remember to cut me in, I thought of it!

  • @charlesdean03
    @charlesdean03 2 years ago

    You talked about all the pros but not the cons of SSDs. SSDs have more read/write limitations; compared to platter drives (HDDs), their lifespan is like half or less. And the biggest factor where SSDs fall far behind, and why the world will keep using HDDs, is capacity: if your NAS is going over 24TB (and I'm being conservative on the RAIDZ type), you'll have to pay an arm and a leg, maybe sell one of your balls too, to build that FreeNAS server or any type of storage server.

    • @RaidOwl
      @RaidOwl  2 years ago +2

      These are cache drives, not storage drives. If you're watching a video about TrueNAS, I'm assuming you know the pros/cons of HDDs vs SSDs.

  • @creed5248
    @creed5248 1 year ago

    NVMe brings back the RAID 0 fun...

  • @GuillaumeLortie
    @GuillaumeLortie 3 years ago

    It's now $80 USD.

  • @creed5248
    @creed5248 1 year ago

    Those drives are dirt cheap this year ... LoL !!

  • @jmonsted
    @jmonsted 2 years ago

    The price of that card has gone waaaaay up.

    • @RaidOwl
      @RaidOwl  2 years ago

      I know…it’s sad

  • @shephusted2714
    @shephusted2714 1 year ago

    With 4 NVMe drives you should have gotten something like 10G/10G on a read/write I/O test. The card was probably PCIe v2, which more than likely explains the subpar performance. Additionally, for caching, Optane is the best: lower latency and higher endurance. But overall you get an A for effort, mostly... this stuff is complex and not easy. Thanks for being an early optimization adopter! Try to do an update to this video with another card and some small Optane drives; they're EOL but still available.

  • @TheRealMrGuvernment
    @TheRealMrGuvernment 1 year ago

    Unless your ARC is getting full, there is no reason for an L2ARC... especially not on a crap $25 card. TrueNAS has to be treated as a system you trust your data on. Using "desktop" parts just takes away from the reliability of TrueNAS and ZFS.

  • @user56
    @user56 2 years ago +3

    Using Microsoft Edge on a Mac... you really want to see the world burn, don't you?

    • @RaidOwl
      @RaidOwl  2 years ago +1

      I pour the milk before the cereal too

    • @ForstHeld
      @ForstHeld 2 years ago +2

      @@RaidOwl You are going too far. :)

  • @Luagsch
    @Luagsch 1 year ago

    PCI Card is so 1995...

  • @scalamasterelectros3204
    @scalamasterelectros3204 2 years ago

    For some weird reason, Samsung SATA hard drives are shit but Samsung SSDs are good; WD is the opposite.

  • @bananafartmanmd8775
    @bananafartmanmd8775 1 year ago

    Dude, please stop calling it "IT mode"; no such mode exists here. "IT" is just an LSI firmware for passing disks through individually to the host. This card is not running LSI IT firmware.

  • @m4nc1n1
    @m4nc1n1 2 years ago

    $55 now :(

  • @LinuxMaster9
    @LinuxMaster9 9 months ago

    Those are some crap NVMe drives.

  • @Ender_Wiggin
    @Ender_Wiggin 3 years ago

    Wish it was still 25 bucks.

    • @RaidOwl
      @RaidOwl  3 years ago

      Yeah I can’t believe it went up so much.

  • @rileybaker8294
    @rileybaker8294 2 years ago

    Really low quality video.

    • @RaidOwl
      @RaidOwl  2 years ago

      I agree, dude sucks

  • @psycl0ptic
    @psycl0ptic 2 years ago

    I'd strongly recommend not putting all that storage and those VMs on a $25 unknown-brand PCIe card. Feels just a bit risky :)

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 2 years ago

    Deleted my comment... interesting. This guy clearly has no insight into the ZFS filesystem; his cache strategy was laughable and made no sense. That's why he doesn't show us the cache results.

    • @RaidOwl
      @RaidOwl  2 years ago

      Yeah dude sucks

  • @ppal64
    @ppal64 2 years ago +2

    Nobody speaks "Chinese", just like no one speaks "European".
    Mandarin - The official standard and the most widely spoken variety.
    Wu - Spoken mostly on the coast around Shanghai.
    Yue - Not intelligible to speakers of other dialects; known as Cantonese.
    Xiang - Spoken in southern China's Hunan province; intelligible with Mandarin knowledge.
    Min - Spoken in Fujian province, as well as Taiwan.
    Gan - Spoken in a variety of provinces, including Jiangxi and Fujian. Also known as Kan.
    Hakka